1. About the project

This 7-week course, as described in Kimmo Vehkalahti’s blog, is

“…open for everyone willing to learn how to use RStudio, R Markdown and GitHub (state-of-the-art tools of data science) to visualise and analyse open data with multivariate statistical methods following principles and practices of reproducible research…”

It will be fantastic to learn some new \(R\) tricks, and also some new (to me) statistical methods.

Annukka’s GitHub repository


2. Regression and model validation

Below is all the code, together with my interpretations and explanations, for this week’s data analysis exercises.

A brief note about the data wrangling that preceded this diary entry: after I filtered my dataset to exclude rows that had zero values for points, almost the whole dataset printed NA values. I found the following solution on StackOverflow:

# keep only the rows where every column is non-zero
a_dataset <- analysis_dataset[apply(analysis_dataset != 0, 1, all), ]

I now have the correct number of observations and variables, but note that this method excludes all rows containing a zero in any column, not just in points.
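
A more targeted alternative would have been to filter on the points column only, for example (a sketch, using the same analysis_dataset):

# keep only the rows where points is non-zero, leaving zeros in other columns intact
a_dataset <- analysis_dataset[analysis_dataset$points != 0, ]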

To read the previously wrangled data from my local folder into R:

mydata <- read.csv("learning2014.csv")

The dataset is a combination of certain variables from a survey dataset. The survey examined the relationship between learning approaches and students’ achievements. All combination variables were scaled back to the original scale by taking the mean of the corresponding questions. Observations where the exam points variable is zero (the student did not sit the exam) were excluded. The (combination) variables included are:
- gender
- age
- attitude (Global attitude toward statistics)
- deep (measures deep learning)
- stra (measures strategic learning)
- surf (measures surface learning)
- points (points from the exam)

To begin the graphical overview of the data, I initialized the plot with the data and aesthetic mappings, colouring the points by the variable gender, and then showed summaries of the variables in the data:

library(ggplot2)
# initialize the plot: attitude vs points, coloured by gender
p1 <- ggplot(mydata, aes(x = attitude, y = points, col = gender))
# add a scatter layer and a linear regression line per gender, then draw the plot
p2 <- p1 + geom_point()
p3 <- p2 + geom_smooth(method = "lm")
p3

A summary of the data is presented below.

summary(mydata)  
##        X          gender       age           attitude          deep      
##  Min.   :  1.00   F:110   Min.   :17.00   Min.   :1.400   Min.   :1.583  
##  1st Qu.: 44.25   M: 56   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:3.333  
##  Median : 87.50           Median :22.00   Median :3.200   Median :3.667  
##  Mean   : 90.13           Mean   :25.51   Mean   :3.143   Mean   :3.680  
##  3rd Qu.:136.75           3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:4.083  
##  Max.   :183.00           Max.   :55.00   Max.   :5.000   Max.   :4.917  
##                                                                          
##       stra            surf           points           X.1       
##  Min.   :1.250   Min.   :1.583   Min.   : 7.00   Min.   :22.72  
##  1st Qu.:2.625   1st Qu.:2.417   1st Qu.:19.00   1st Qu.:22.72  
##  Median :3.188   Median :2.833   Median :23.00   Median :22.72  
##  Mean   :3.121   Mean   :2.787   Mean   :22.72   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:3.167   3rd Qu.:27.75   3rd Qu.:22.72  
##  Max.   :5.000   Max.   :4.333   Max.   :33.00   Max.   :22.72  
##                                                  NA's   :165    
##       X.2       
##  Min.   :11.36  
##  1st Qu.:11.36  
##  Median :11.36  
##  Mean   :11.36  
##  3rd Qu.:11.36  
##  Max.   :11.36  
##  NA's   :165

The gender split of the respondents was 110 female and 56 male. Their ages ranged from 17 to 55, with an average age of 25.51. The variable “attitude” is a combination of 10 survey variables and measures a global attitude toward statistics. The next three variables, deep approach (deep), strategic approach (stra) and surface approach (surf), are likewise combination variables, scaled back to the original scale. The average of attitude is 3.143, deep 3.68, strategic 3.121 and surface 2.787. The variable points shows the number of points the student achieved in the exam; the average was 22.72.

For my model I chose the variables age, attitude and stra as explanatory variables and the target variable is points. A summary of the fitted model is shown below.

my_model <- lm(points ~ age + attitude + stra, data = mydata)
summary(my_model)
## 
## Call:
## lm(formula = points ~ age + attitude + stra, data = mydata)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -18.1149  -3.2003   0.3303   3.4129  10.7599 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 10.89543    2.64834   4.114 6.17e-05 ***
## age         -0.08822    0.05302  -1.664   0.0981 .  
## attitude     3.48077    0.56220   6.191 4.72e-09 ***
## stra         1.00371    0.53434   1.878   0.0621 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.26 on 162 degrees of freedom
## Multiple R-squared:  0.2182, Adjusted R-squared:  0.2037 
## F-statistic: 15.07 on 3 and 162 DF,  p-value: 1.07e-08

The p-values of the coefficients indicate whether these relationships are statistically significant. In the model above, age and stra have p-values of approximately 0.10 and 0.06 respectively, which, while larger than the generally accepted 0.05 threshold, still show some evidence of an effect. However, keeping explanatory variables that are not statistically significant can reduce the model’s precision, so I will keep only attitude (p < 0.001) and stra (whose p-value is close to 0.05) in my re-fitted model and drop age. The new model is shown below.

my_model <- lm(points ~ attitude + stra, data = mydata)
summary(my_model)
## 
## Call:
## lm(formula = points ~ attitude + stra, data = mydata)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.6436  -3.3113   0.5575   3.7928  10.9295 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   8.9729     2.3959   3.745  0.00025 ***
## attitude      3.4658     0.5652   6.132 6.31e-09 ***
## stra          0.9137     0.5345   1.709  0.08927 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared:  0.2048, Adjusted R-squared:  0.1951 
## F-statistic: 20.99 on 2 and 163 DF,  p-value: 7.734e-09

The relationship between the target variable points and the explanatory variable attitude is strong: its p-value is about 6.3e-09, well below 0.001. The relationship between points and stra is weaker (p ≈ 0.09). To assess how well my model fits the data, we look at R-squared, which indicates the proportion of the variance in the dependent variable points that the explanatory variables explain collectively. In this case the multiple R-squared is relatively low at approximately 20%, indicating that the fit is not very good. In fact, it was slightly higher, approximately 22%, in the model that I previously rejected.

The multiple regression model makes the following assumptions:
1. The relationship between the independent and dependent variables is linear.
2. The errors between observed and predicted values should be normally distributed.
3. The data has no multicollinearity (this happens when the independent variables correlate too highly with each other).
4. Homoscedasticity, a.k.a. homogeneity of variance.

The validity of these model assumptions can be explored by analyzing the residuals of the model, as in the plots shown below.

To produce the diagnostic plots Residuals vs Fitted values (plot 1), Normal QQ-plot (plot 2) and Residuals vs Leverage (plot 5):

par(mfrow = c(2,2)) 
plot(my_model, which = c(1:2, 5))

The normality assumption (assumption 2. above) can be explored by analyzing the Q-Q plot. The fit to the normality assumption is reasonably good in this case.

The homoscedasticity assumption (assumption 4.) implies that the size of the errors should not depend on the explanatory variables. This can be explored with the residuals vs fitted plot. As we can see from the first plot, the points are scattered fairly randomly with no discernible pattern, so the assumption holds.

The last plot, residuals vs leverage, helps determine which observations have an unusually high impact on the model. As we can observe from the plot, no single observation stands out, so no individual data point is distorting the fit. Note that this plot does not really address multicollinearity (assumption 3.), which is easier to check numerically, as sketched below.
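
Multicollinearity itself (assumption 3.) can be checked with variance inflation factors, for example from the car package (a small sketch; assumes car is installed, as it is not used elsewhere in this diary):

# variance inflation factors of the explanatory variables; values close to 1
# indicate little multicollinearity, values above roughly 5 would be a warning sign
library(car)
vif(my_model)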

After this graphical exploration we can conclude that the validity of the model assumptions is good.


3. Logistic regression

Moving on from linear regression, this week we will look at logistic regression, a method well suited for predicting and classifying data through probabilities. We will also look at the concept of the odds ratio (OR), take a look at cross-validation, and learn about splitting the data into a training set and a test set.

Below is all the code, together with my interpretations and explanations, for this week’s data analysis exercises.

The data for this exercise was downloaded from the UC Irvine Machine Learning Repository.

The data consists of two student performance data sets that describe student achievement in secondary education at two schools in Portugal. The variables include student grades as well as demographic, social and school-related features, collected using school reports and questionnaires. The two datasets cover performance in two distinct subjects: Mathematics (mat) and Portuguese language (por).

In the data wrangling exercise (which can be found in my GitHub repository) performed prior to this report, the two data sets were combined using a number of variables as student identifiers. Only the students present in both data sets were kept. Next, the average of the answers related to weekday and weekend alcohol consumption was taken to create a new variable ‘alc_use’. Then ‘alc_use’ was used to create a new logical column ‘high_use’ (TRUE for students whose ‘alc_use’ is greater than 2 and FALSE otherwise).
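
For reference, the last two steps look roughly like this (a sketch rather than the full script; math_por here is a placeholder name for the joined data with the weekday (Dalc) and weekend (Walc) consumption columns):

library(dplyr)
# average the weekday and weekend alcohol consumption into one variable
alc <- mutate(math_por, alc_use = (Dalc + Walc) / 2)
# flag the students whose average consumption is greater than 2
alc <- mutate(alc, high_use = alc_use > 2)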

Citation:
P. Cortez and A. Silva. Using Data Mining to Predict Secondary School Student Performance. In A. Brito and J. Teixeira Eds., Proceedings of 5th FUture BUsiness TEChnology Conference (FUBUTEC 2008) pp. 5-12, Porto, Portugal, April, 2008, EUROSIS, ISBN 978-9077381-39-7.

Part 1 - General Housekeeping

For the first part of this analysis I created a new R Markdown file and saved it as an empty file named ‘chapter3.Rmd’. Then I included it as a child file in my ‘index.Rmd’ file, and as a result you are now reading this.

Part 2 - The Data Set

I read the joined student alcohol consumption data into R from my local folder.

alc <- read.table("create_alc.csv", sep = "," , header=TRUE)

Next, I printed out the names of the variables in the data.

colnames(alc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "guardian"   "traveltime"
## [16] "studytime"  "failures"   "schoolsup"  "famsup"     "paid"      
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "goout"      "Dalc"       "Walc"       "health"     "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"

The data set consists of 382 observations of 35 variables. The variables are a combination of numeric, nominal and binary attributes.

Part 3 - Choosing Variables of Interest

For this part, in order to study the relationship between high or low alcohol consumption and other factors, I chose the following 4 variables from the data, each with my personal hypothesis:
1. age - I hypothesize that higher age will correlate with high alcohol consumption.
2. failures - indicating the number of past class failures. My hypothesis is that a high number of failures correlates with high alcohol use.
3. famrel - indicator for the quality of family relations. Better family relations will mean lower alcohol consumption and conversely worse relations will correlate with higher alcohol use.
4. absences - number of school absences. A higher number of absences will relate to high alcohol consumption.

Part 4 - Exploring the Variables

Next, I accessed some R packages that I will need shortly, and I also created a mini-dataset called alc_dataset. This mini-dataset will allow me to represent just the data I chose for my analysis.

Accessing packages dplyr and ggplot2:

library(dplyr); library(ggplot2)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union

Creating alc_dataset:

# pick just the chosen variables into a mini data frame
alc_dataset <- dplyr::select(alc, alc_use, age, failures, famrel, absences)
summary(alc_dataset)
##     alc_use           age           failures          famrel     
##  Min.   :1.000   Min.   :15.00   Min.   :0.0000   Min.   :1.000  
##  1st Qu.:1.000   1st Qu.:16.00   1st Qu.:0.0000   1st Qu.:4.000  
##  Median :1.500   Median :17.00   Median :0.0000   Median :4.000  
##  Mean   :1.889   Mean   :16.59   Mean   :0.2016   Mean   :3.937  
##  3rd Qu.:2.500   3rd Qu.:17.00   3rd Qu.:0.0000   3rd Qu.:5.000  
##  Max.   :5.000   Max.   :22.00   Max.   :3.0000   Max.   :5.000  
##     absences   
##  Min.   : 0.0  
##  1st Qu.: 1.0  
##  Median : 3.0  
##  Mean   : 4.5  
##  3rd Qu.: 6.0  
##  Max.   :45.0

Above is the initial numeric summary of all the variables I chose. It does not tell us anything about the relationships between the variables, but it is a nice, neat summary of each variable’s distribution.

Let’s look at a simple plot that illustrates high_use.

ggplot(alc, aes(x = high_use, fill = sex)) + 
  geom_bar(position = "fill")

Unsurprisingly, most high users of alcohol in this dataset were male.

Before we get into the logistic regression in part 5 below, let’s look at some numerical (summaries by group using the pipe operator) and graphical (box plot) representations of the data. Here, I will compare the variable high_use with variables age, failures, famrel, and absences.

Variable age

alc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_age = mean(age))
## # A tibble: 4 x 4
## # Groups:   sex [?]
##   sex   high_use count mean_age
##   <fct> <lgl>    <int>    <dbl>
## 1 F     FALSE      156     16.6
## 2 F     TRUE        42     16.5
## 3 M     FALSE      112     16.3
## 4 M     TRUE        72     17.0
# bar plot of total alcohol use by age (note: the data has no 'consumption'
# column, so alc_use is used here)
g1a <- ggplot(data = alc, aes(x = age, y = alc_use)) +
  geom_bar(stat = "identity")
g1a

# initialise a plot of high_use and age
g1 <- ggplot(alc, aes(x = high_use, y = age, col = sex))
 
 # define the plot as a boxplot and draw it
g1 + geom_boxplot() + ggtitle("Student ages by alcohol consumption and sex")

The numerical summary and the plot above show quite clearly that my first hypothesis, concerning age, was largely rubbish. Moving on!

Variable failures

alc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_failures = mean(failures))
## # A tibble: 4 x 4
## # Groups:   sex [?]
##   sex   high_use count mean_failures
##   <fct> <lgl>    <int>         <dbl>
## 1 F     FALSE      156         0.115
## 2 F     TRUE        42         0.286
## 3 M     FALSE      112         0.179
## 4 M     TRUE        72         0.375
# initialise a plot of high_use and failures
g2 <- ggplot(alc, aes(x = high_use, y = failures, col = sex))
 
 # define the plot as a boxplot and draw it
 g2 + geom_boxplot() + ggtitle("Student failures by alcohol consumption and sex")

The analysis of failures shows that, the rather useless box plot notwithstanding, my hypothesis was on the right track. In both males and females, the mean number of failures of high alcohol users was considerably higher than that of the others. In males, the difference between the mean failures of high and low alcohol users, 0.196, is larger than the corresponding difference in females, 0.171. This is starting to suggest that perhaps I should have hypothesized that being male is a factor in high alcohol use.

Variable famrel

alc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_famrel = mean(famrel))
## # A tibble: 4 x 4
## # Groups:   sex [?]
##   sex   high_use count mean_famrel
##   <fct> <lgl>    <int>       <dbl>
## 1 F     FALSE      156        3.91
## 2 F     TRUE        42        3.76
## 3 M     FALSE      112        4.13
## 4 M     TRUE        72        3.79
# initialise a plot of high_use and family relationships
g3 <- ggplot(alc, aes(x = high_use, y = famrel, col = sex))
 
 # define the plot as a boxplot and draw it
 g3 + geom_boxplot() + ggtitle("Student family relationships by alcohol consumption and sex")

The analysis of famrel likewise supports my hypothesis. In both sexes, the mean family relations score of high alcohol users was lower than that of the others. Here, a difference between the sexes is also evident, albeit in a rather different manner than for failures. The mean family relations scores of the high alcohol users were about the same in both sexes, whereas, rather interestingly, the score was better for males in the low alcohol use group.

Variable absences

alc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_absences = mean(absences))
## # A tibble: 4 x 4
## # Groups:   sex [?]
##   sex   high_use count mean_absences
##   <fct> <lgl>    <int>         <dbl>
## 1 F     FALSE      156          4.22
## 2 F     TRUE        42          6.79
## 3 M     FALSE      112          2.98
## 4 M     TRUE        72          6.12
# initialise a plot of high_use and absences
g4 <- ggplot(alc, aes(x = high_use, y = absences, col = sex))
 
 # define the plot as a boxplot and draw it
 g4 + geom_boxplot() + ggtitle("Student absences by alcohol consumption and sex")

The analysis of absences shows that my hypothesis was correct: a higher number of absences does indeed correlate with high alcohol consumption. Again, the difference between the sexes is striking. The mean absences of male and female high alcohol users are in the same ballpark, but the figures for low alcohol users differ dramatically between the sexes: 2.98 for the males and 4.22 for the females. This suggests not only that females generally have more school absences, but also that in males high alcohol use is associated with a much larger increase in mean absences.

Part 5 - Logistic Regression

Next, I will use logistic regression to explore the relationship between my chosen variables and the binary variable high_use. This is done by using glm() to fit a logistic regression model. I will also print a summary of the model and use coef() to print out its coefficients.

model <- glm(high_use ~ age + failures + famrel + absences, data = alc, family = "binomial")
summary(model)
## 
## Call:
## glm(formula = high_use ~ age + failures + famrel + absences, 
##     family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.2478  -0.7937  -0.6783   1.1488   1.9908  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -2.73938    1.73621  -1.578 0.114614    
## age          0.13990    0.10248   1.365 0.172213    
## failures     0.43635    0.18747   2.328 0.019933 *  
## famrel      -0.23849    0.12558  -1.899 0.057560 .  
## absences     0.08040    0.02259   3.560 0.000371 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 434.98  on 377  degrees of freedom
## AIC: 444.98
## 
## Number of Fisher Scoring iterations: 4
coef(model)
## (Intercept)         age    failures      famrel    absences 
## -2.73937782  0.13989709  0.43634997 -0.23848933  0.08040375

In the fitted model, absences has a p-value of about 0.0004 (significance code ***, i.e. p < 0.001) and is therefore clearly statistically significant. failures is also significant (*, p ≈ 0.02). famrel, at p ≈ 0.058, is just above the generally accepted significance level of 0.05, whereas age (p ≈ 0.17) is not statistically significant, as the earlier exploration already suggested.

As I observed above, sex seems to be statistically relevant, so I am fitting my model again. This time I will include sex and exclude age.

model <- glm(high_use ~ sex + failures + famrel + absences, data = alc, family = "binomial")
summary(model)
## 
## Call:
## glm(formula = high_use ~ sex + failures + famrel + absences, 
##     family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.1174  -0.8376  -0.5867   1.0091   2.1557  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -0.83877    0.53446  -1.569   0.1166    
## sexM         0.99120    0.24540   4.039 5.37e-05 ***
## failures     0.42458    0.18854   2.252   0.0243 *  
## famrel      -0.27632    0.12837  -2.153   0.0314 *  
## absences     0.09052    0.02252   4.020 5.83e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 419.78  on 377  degrees of freedom
## AIC: 429.78
## 
## Number of Fisher Scoring iterations: 4
coef(model)
## (Intercept)        sexM    failures      famrel    absences 
## -0.83876936  0.99119526  0.42458409 -0.27631940  0.09051704

As I suspected, the model now looks much better. All the variables have an acceptable level of significance, with the variable sexM (male gender) having a significance level on par with absences.

Now I will create the object OR (odds ratios) by using coef() to extract the coefficients of the model and then applying the exp function to them. The next step is to use confint() to compute confidence intervals for the coefficients; I will exponentiate these values too and assign the results to the object CI. Finally, I will use cbind() to combine and print out the odds ratios and their confidence intervals.

OR <- coef(model) %>% exp
CI <- confint(model) %>% exp
## Waiting for profiling to be done...
cbind(OR, CI)
##                    OR     2.5 %    97.5 %
## (Intercept) 0.4322421 0.1489271 1.2204360
## sexM        2.6944531 1.6760023 4.3948359
## failures    1.5289544 1.0573611 2.2280778
## famrel      0.7585706 0.5887336 0.9756953
## absences    1.0947402 1.0496217 1.1468498

To be able to meaningfully interpret the coefficients of the model as odds ratios we need to know when odds ratios are used. They are used to compare the relative odds of the occurrence of the outcome of interest (in this case high alcohol use), given exposure to the variable of interest.

The odds ratios are interpreted as follows:
- OR = 1: exposure does not affect the odds of the outcome
- OR > 1: exposure is associated with higher odds of the outcome
- OR < 1: exposure is associated with lower odds of the outcome
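
To make this concrete: the coefficient of absences is 0.0905, so its odds ratio is exp(0.0905) ≈ 1.095, i.e. each additional absence is associated with roughly 9.5% higher odds of high alcohol use, holding the other variables fixed.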

We can therefore see that better family relations (famrel) are associated with lower odds of the outcome, whereas absences, failures, and especially being male (sexM) are associated with higher odds of the outcome high_use. sexM also has the widest confidence interval of all the variables.

If a confidence interval crosses 1 (i.e. contains the value of no effect), the observed effect is not statistically significant. As can be seen from the table above, this is not the case for any of the variables.

As stated before, my hypothesis regarding the variable age was wrong, and as soon as it became apparent that being male matters, I added the variable sex to my model. The four variables in the fitted model all have a statistically significant relationship with high alcohol use, in descending order of odds ratio: sexM, failures, absences, and famrel.

Part 6 - Exploring the Predictive Power of My Model

According to my logistic regression model, sexM, failures, famrel and absences all have a statistically significant relationship with high/low alcohol use. Therefore, I do not need to make any further adjustments to my model at this stage, as I already did so in the step above.

Below I explore the predictive power of my model by providing a 2x2 cross tabulation of predictions versus the actual values.

probabilities <- predict(model, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)
select(alc, failures, absences, famrel, sex, high_use, probability, prediction) %>% tail(10)
##     failures absences famrel sex high_use probability prediction
## 373        1        0      4   M    FALSE   0.3709210      FALSE
## 374        1        7      5   M     TRUE   0.4573619      FALSE
## 375        0        1      5   F    FALSE   0.1062293      FALSE
## 376        0        6      4   F    FALSE   0.1976662      FALSE
## 377        1        2      5   F    FALSE   0.1659304      FALSE
## 378        0        2      4   F    FALSE   0.1464134      FALSE
## 379        2        2      2   F    FALSE   0.4106677      FALSE
## 380        0        3      1   F    FALSE   0.3007902      FALSE
## 381        0        4      2   M     TRUE   0.4904650      FALSE
## 382        0        2      4   M     TRUE   0.3160860      FALSE
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   253   15
##    TRUE     82   32

Let’s also draw a plot to examine the predictions and create a cross table of high_use versus prediction.

g <- ggplot(alc, aes(x = probability, y = high_use, col = prediction))
g + geom_point()

table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table %>% addmargins
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.66230366 0.03926702 0.70157068
##    TRUE  0.21465969 0.08376963 0.29842932
##    Sum   0.87696335 0.12303665 1.00000000

The table above illustrates how accurate my predictions are. Adding the shares of cases where high_use is FALSE but the model predicts TRUE (3.9%) and where high_use is TRUE but the model predicts FALSE (21.5%) gives the proportion of all cases where my model is wrong: about 25.4%. The model is therefore right roughly 75% of the time, which is much better than simple guessing with equal weights (50%), and it also beats the naive strategy of always predicting FALSE, which would be right 70.2% of the time. This indicates that my model has some validity.
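
The same figures can be computed directly from the cross tabulation (a quick sanity check):

# share of correct predictions is the diagonal of the confusion table
tab <- table(high_use = alc$high_use, prediction = alc$prediction)
sum(diag(tab)) / sum(tab)      # accuracy, about 0.746
1 - sum(diag(tab)) / sum(tab)  # total error rate, about 0.254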

The next task at hand is accuracy and loss functions. I will compute the total proportion of inaccurately classified individuals, in other words the training error. The loss function loss_func is defined and called to compute the average number of wrong predictions in the (training) data.

loss_func <- function(class, prob) {
  # a prediction is wrong when the predicted probability is on the wrong side
  # of 0.5 relative to the true class (FALSE/0 or TRUE/1)
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2539267

The training error for this model is approximately 0.254, while the test error is calculated in the following section.

Part 7 - Bonus: 10-fold Cross-validation

As a bonus I will perform a 10-fold cross-validation on my model. This gives us the testing error, as opposed to the training error above.

# K-fold cross-validation
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = model, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2670157

In this run my model’s test error, 0.267, is slightly higher than that of the model introduced in DataCamp (0.264). Note that K-fold cross-validation splits the data randomly, so the exact error varies a little from run to run. The testing error of my model is also higher than its training error (0.254), which is expected: a lower training error is typical, because a method can overfit the training data.
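
Since the split is random, one way to make the reported CV error reproducible is to fix the random seed before calling cv.glm (a small sketch; the seed value is arbitrary):

# fixing the seed makes the 10-fold split, and thus the error, repeatable
set.seed(42)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = model, K = 10)
cv$delta[1]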

Part 8 - Super-Bonus: Comparing the Performance of Different Logistic Regression Models

As a super-bonus I will perform cross-validation to compare the performance of different logistic regression models. I will start with the following 20 predictors: sex, address, famsize, Pstatus, Medu, Fedu, Mjob, Fjob, guardian, studytime, failures, schoolsup, famsup, activities, higher, romantic, famrel, goout, health, and absences.

# note: glm() takes the predictor columns straight from 'alc' via its 'data'
# argument, so the columns do not need to be copied into separate objects first

model_20 <- glm(high_use ~ sex + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + guardian + studytime + failures + schoolsup + famsup + activities + higher + romantic + famrel + goout + health + absences, data = alc, family = "binomial")

probabilities <- predict(model_20, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
t_er_20 <- loss_func(class = alc$high_use, prob = alc$probability)
t_er_20
## [1] 0.1989529
cv_20 <- cv.glm(data = alc, cost = loss_func, glmfit = model_20, K = 10)
ts_er_20 <- cv_20$delta[1]
ts_er_20
## [1] 0.2408377
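
Since the same training-error and CV-error computation is repeated for every model below, it could also be wrapped in a small helper function (a sketch, assuming the loss_func, alc and boot objects from above; evaluate_model is a name of my own invention):

# fit a logistic regression from a formula, then return its training error
# and its 10-fold cross-validation error
evaluate_model <- function(formula, data = alc) {
  fit <- glm(formula, data = data, family = "binomial")
  train_error <- loss_func(class = data$high_use,
                           prob = predict(fit, type = "response"))
  cv_error <- cv.glm(data = data, cost = loss_func, glmfit = fit, K = 10)$delta[1]
  c(training = train_error, test = cv_error)
}
# e.g. evaluate_model(high_use ~ sex + goout + absences)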

Let’s see what happens when we reduce the number of predictors to 15.

I will leave out higher, famsup, schoolsup, Medu and Fedu from the next model, leaving me with 15 predictors.

model_15 <- glm(high_use ~ sex + address + famsize + Pstatus + Mjob + Fjob + guardian + studytime + failures + activities + romantic + famrel + goout + health + absences, data = alc, family = "binomial")

probabilities <- predict(model_15, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
t_er_15 <- loss_func(class = alc$high_use, prob = alc$probability)
t_er_15
## [1] 0.1937173
cv_15 <- cv.glm(data = alc, cost = loss_func, glmfit = model_15, K = 10)
ts_er_15 <- cv_15$delta[1]
ts_er_15
## [1] 0.2382199

Now I will reduce the number of predictors to 10.

I will leave out famsize, Pstatus, Mjob, Fjob and romantic from the next model, leaving me with 10 predictors.

model_10 <- glm(high_use ~ sex + address + guardian + studytime + failures + activities + famrel + goout + health + absences, data = alc, family = "binomial")

probabilities <- predict(model_10, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
t_er_10 <- loss_func(class = alc$high_use, prob = alc$probability)
t_er_10
## [1] 0.2041885
cv_10 <- cv.glm(data = alc, cost = loss_func, glmfit = model_10, K = 10)
ts_er_10 <- cv_10$delta[1]
ts_er_10
## [1] 0.2198953

Now I will reduce the number of predictors to 6.

I will take out 4 predictors (activities, guardian, failures and health).

model_6 <- glm(high_use ~ sex + address + studytime + famrel + goout + absences, data = alc, family = "binomial")

probabilities <- predict(model_6, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
t_er_6 <- loss_func(class = alc$high_use, prob = alc$probability)
t_er_6
## [1] 0.2172775
cv_6 <- cv.glm(data = alc, cost = loss_func, glmfit = model_6, K = 10)
ts_er_6 <- cv_6$delta[1]
ts_er_6
## [1] 0.2277487

I will now take out the predictors one by one, until I reach 1.

model_5 <- glm(high_use ~ sex + studytime + famrel + goout + absences, data = alc, family = "binomial")

probabilities <- predict(model_5, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
t_er_5 <- loss_func(class = alc$high_use, prob = alc$probability)
t_er_5
## [1] 0.2172775
cv_5 <- cv.glm(data = alc, cost = loss_func, glmfit = model_5, K = 10)
ts_er_5 <- cv_5$delta[1]
ts_er_5
## [1] 0.2251309
model_4 <- glm(high_use ~ sex + famrel + goout + absences, data = alc, family = "binomial")

probabilities <- predict(model_4, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
t_er_4 <- loss_func(class = alc$high_use, prob = alc$probability)
t_er_4
## [1] 0.2041885
cv_4 <- cv.glm(data = alc, cost = loss_func, glmfit = model_4, K = 10)
ts_er_4 <- cv_4$delta[1]
ts_er_4
## [1] 0.2172775
model_3 <- glm(high_use ~ sex + goout + absences, data = alc, family = "binomial")

probabilities <- predict(model_3, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
t_er_3 <- loss_func(class = alc$high_use, prob = alc$probability)
t_er_3
## [1] 0.2094241
cv_3 <- cv.glm(data = alc, cost = loss_func, glmfit = model_3, K = 10)
ts_er_3 <- cv_3$delta[1]
ts_er_3
## [1] 0.2172775
model_2 <- glm(high_use ~ sex + absences, data = alc, family = "binomial")

probabilities <- predict(model_2, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
t_er_2 <- loss_func(class = alc$high_use, prob = alc$probability)
t_er_2
## [1] 0.2565445
cv_2 <- cv.glm(data = alc, cost = loss_func, glmfit = model_2, K = 10)
ts_er_2 <- cv_2$delta[1]
ts_er_2
## [1] 0.2643979
model_1 <- glm(high_use ~ sex, data = alc, family = "binomial")

probabilities <- predict(model_1, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
t_er_1 <- loss_func(class = alc$high_use, prob = alc$probability)
t_er_1
## [1] 0.2984293
cv_1 <- cv.glm(data = alc, cost = loss_func, glmfit = model_1, K = 10)
ts_er_1 <- cv_1$delta[1]
ts_er_1
## [1] 0.2984293

Finally, I drew a simple graph that displays the trends of both training and testing errors by the number of predictors in the model.

t_er<-c(t_er_1,t_er_2,t_er_3,t_er_4,t_er_5,t_er_6,t_er_10,t_er_15,t_er_20)

ts_er<-c(ts_er_1,ts_er_2,ts_er_3,ts_er_4,ts_er_5,ts_er_6,ts_er_10,ts_er_15,ts_er_20)

plot(t_er, type="o", col="blue", xlab="Number of predictors / model complexity", ylab="Prediction error")
lines(ts_er, type="o", col="red")
axis(1, at=1:9, lab=c("1","2","3","4","5","6","10","15","20"))
title(main="Training and test errors by number of predictors")
legend("topright", c("training error","test error"),lty=c(1,1), lwd=c(2.5,2.5),col=c("blue","red")) 

Overfitting shows up when the test error starts to increase while the training error is still decreasing. In my run the sweet spot is at around 10 predictors; with more predictors than that the test error rises while the training error keeps falling, so the model overfits, possibly because it is too complex (possibly, because there are many other reasons for overfitting too).

To the left of the graph are situations where the model is low in complexity and has high bias and low variance. To the right of the graph are situations where the model is high in complexity and has low bias and high variance.

That’s all for this week - I’m already looking forward to next week’s tasks!


4. Clustering and Classification

The topics of this week - clustering and classification - are methods for exploring statistical data. Clustering means grouping data points so that the points within a group are closer to each other than to points in other groups. Once we have clustered successfully, we can try to classify new observations into these clusters, thus validating the results of the clustering.

Below is all the code, together with my interpretations and explanations, for this week’s data analysis exercises.

Part 1 - General Housekeeping

As last week, I created a new R Markdown file and saved it as an empty file named ‘chapter4.Rmd’. Then I included it as a child file in my ‘index.Rmd’ file, and as a result you are now reading this.

I also accessed some packages that I might need later.

library(ggplot2); library(GGally); library(corrplot); library(tidyr); library(dplyr)
## 
## Attaching package: 'GGally'
## The following object is masked from 'package:dplyr':
## 
##     nasa
## corrplot 0.84 loaded

Part 2 - Loading the Boston data

To load and explore the Boston data, I first needed to access the MASS package, and then load the “Boston” data. Below is the structure and summary of the data, and also a matrix of the variables.

# access the MASS package
library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
# load the data
data("Boston")

# explore the dataset
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

The Boston data set is a data frame with 506 observations of 14 variables. The data examines housing values in the suburbs of Boston. The variables are all numeric, with two variables being integer types.

A link to details about the Boston dataset can be seen here.

Here is the source of the data:

Harrison, D. and Rubinfeld, D.L. (1978) Hedonic prices and the demand for clean air. J. Environ. Economics and Management 5, 81-102.

Belsley, D.A., Kuh, E. and Welsch, R.E. (1980) Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: Wiley.

Part 3 - Exploring the Boston data

Below is a graphical overview of the data and the summaries of the variables in the data.

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00
pairs(Boston)

The variable of interest here, as the data looks at housing values, is medv. It shows the median value of owner-occupied homes in $1000s.

The values of medv vary from 5 to 50, with a mean value of 22.53. medv correlates quite well with all the variables (apart from the Charles River dummy variable chas), but most strongly with the average number of rooms per dwelling, rm (0.695), and the lower status of the population, lstat (unsurprisingly a negative correlation, -0.738).

Let’s look at a way of plotting the correlations, this time with the corrplot package. The function cor() can be used to create a correlation matrix of the data.

# calculate the correlation matrix and round it
cor_matrix<-cor(Boston) %>% round(digits = 2)

# print the correlation matrix
cor_matrix
##          crim    zn indus  chas   nox    rm   age   dis   rad   tax
## crim     1.00 -0.20  0.41 -0.06  0.42 -0.22  0.35 -0.38  0.63  0.58
## zn      -0.20  1.00 -0.53 -0.04 -0.52  0.31 -0.57  0.66 -0.31 -0.31
## indus    0.41 -0.53  1.00  0.06  0.76 -0.39  0.64 -0.71  0.60  0.72
## chas    -0.06 -0.04  0.06  1.00  0.09  0.09  0.09 -0.10 -0.01 -0.04
## nox      0.42 -0.52  0.76  0.09  1.00 -0.30  0.73 -0.77  0.61  0.67
## rm      -0.22  0.31 -0.39  0.09 -0.30  1.00 -0.24  0.21 -0.21 -0.29
## age      0.35 -0.57  0.64  0.09  0.73 -0.24  1.00 -0.75  0.46  0.51
## dis     -0.38  0.66 -0.71 -0.10 -0.77  0.21 -0.75  1.00 -0.49 -0.53
## rad      0.63 -0.31  0.60 -0.01  0.61 -0.21  0.46 -0.49  1.00  0.91
## tax      0.58 -0.31  0.72 -0.04  0.67 -0.29  0.51 -0.53  0.91  1.00
## ptratio  0.29 -0.39  0.38 -0.12  0.19 -0.36  0.26 -0.23  0.46  0.46
## black   -0.39  0.18 -0.36  0.05 -0.38  0.13 -0.27  0.29 -0.44 -0.44
## lstat    0.46 -0.41  0.60 -0.05  0.59 -0.61  0.60 -0.50  0.49  0.54
## medv    -0.39  0.36 -0.48  0.18 -0.43  0.70 -0.38  0.25 -0.38 -0.47
##         ptratio black lstat  medv
## crim       0.29 -0.39  0.46 -0.39
## zn        -0.39  0.18 -0.41  0.36
## indus      0.38 -0.36  0.60 -0.48
## chas      -0.12  0.05 -0.05  0.18
## nox        0.19 -0.38  0.59 -0.43
## rm        -0.36  0.13 -0.61  0.70
## age        0.26 -0.27  0.60 -0.38
## dis       -0.23  0.29 -0.50  0.25
## rad        0.46 -0.44  0.49 -0.38
## tax        0.46 -0.44  0.54 -0.47
## ptratio    1.00 -0.18  0.37 -0.51
## black     -0.18  1.00 -0.37  0.33
## lstat      0.37 -0.37  1.00 -0.74
## medv      -0.51  0.33 -0.74  1.00
# visualize the correlation matrix
corrplot(cor_matrix, method="square", type="lower", cl.pos="b", tl.pos="d", tl.cex = 0.6)

The corrplot is a really neat visualization method. Positive correlations are displayed in blue and negative correlations in red; colour intensity and the size of each square are proportional to the correlation coefficients. As we can see above, this gives us a quick visual confirmation of what we already knew: lstat has a strong negative correlation with medv, and rm has a strong positive correlation with medv. At a glance, we can also spot a handful of other strong positive and negative correlations in the matrix.

Part 4 - Creating Training and Test Sets

Now we need to scale the data. This is done by subtracting the column mean from each column and dividing the difference by the standard deviation: \[scaled(x)=\frac{x-mean(x)}{sd(x)}\] The dataset is then standardized, and below I have printed out a summary of the scaled data.

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)

How did the variables change? They have been transformed according to the scaled(x) equation above. It is worth noting that in this new summary all the mean values are 0, and every variable now has a standard deviation of 1.
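
This is easy to verify numerically (a quick check):

# after scaling, every column has mean 0 and standard deviation 1
round(colMeans(boston_scaled), 10)
apply(boston_scaled, 2, sd)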

We can create a categorical variable from a continuous one, and we will do that with the (scaled) crime rate in the Boston dataset. We will cut the crim variable by quantiles to get the high, low and middle rates of crime into their own categories; the quantiles are used as the break points of the new categorical variable. The old crime rate variable is then dropped from the dataset.

# summary of the scaled crime rate
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419367 -0.410563 -0.390280  0.000000  0.007389  9.924110
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

Next, we will divide the dataset into training and test sets, so that 80% of the data belongs to the training set.

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]

# save the correct classes from test data
correct_classes <- test$crime

# remove the crime variable from test data
test <- dplyr::select(test, -crime)

Part 5 - Linear Discriminant Analysis

Next we will look at linear discriminant analysis (LDA). It is a classification and dimension reduction method closely related to logistic regression (from last week) and principal component analysis (next week). It can be used to find the variables that discriminate or separate the classes best, or to predict the classes of new data.

First, we will fit the linear discriminant analysis on the training set created in the previous step. We will use the categorical crime rate as the target variable and all the other variables are predictor variables. Then we will draw the LDA (bi)plot.

# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2425743 0.2623762 0.2673267 0.2277228 
## 
## Group means:
##                   zn      indus        chas        nox         rm
## low       0.89357862 -0.9037946 -0.19198008 -0.8747187  0.4788922
## med_low  -0.08921092 -0.2591986 -0.01233188 -0.5462556 -0.1384345
## med_high -0.39036962  0.1937838  0.16512651  0.3808241  0.1310772
## high     -0.48724019  1.0149946 -0.05835623  1.0675664 -0.3969687
##                 age        dis        rad        tax     ptratio
## low      -0.8440695  0.7752555 -0.6959263 -0.7264336 -0.45409208
## med_low  -0.3005717  0.3573140 -0.5409032 -0.4461891 -0.03131614
## med_high  0.3995658 -0.3716799 -0.3831796 -0.2920893 -0.30536064
## high      0.8262783 -0.8674907  1.6596029  1.5294129  0.80577843
##               black       lstat        medv
## low       0.3883654 -0.78959878  0.55972031
## med_low   0.3166954 -0.10742378 -0.02674906
## med_high  0.1154161 -0.03456903  0.20161028
## high     -0.9559958  0.88719635 -0.71622046
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3
## zn       0.10641598  0.68874185 -0.84891649
## indus    0.01702122 -0.27564312  0.31374058
## chas    -0.04187259 -0.12254095  0.17745050
## nox      0.33052412 -0.82714284 -1.24186721
## rm      -0.09704495 -0.06976809 -0.21238910
## age      0.28798002 -0.30139401 -0.02289067
## dis     -0.08190021 -0.35048824  0.35854380
## rad      3.22376307  0.76477057 -0.18372050
## tax     -0.07331171  0.13778011  0.58688055
## ptratio  0.17035284  0.05264548 -0.16536143
## black   -0.19481391 -0.02153043  0.11080898
## lstat    0.19617631 -0.20735706  0.45741195
## medv     0.19293076 -0.36916444 -0.12203424
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9486 0.0377 0.0137

Now, I will try to explain the above LDA model output with the help of a very useful video that can be found here. Look for the part starting at 02:46.

First at the top we have the prior probabilities of groups. These are simply the number of observations in each class divided by the number of observations in the whole dataset.
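
This is easy to verify from the training data (a quick check):

# the class shares in the training set match the printed prior probabilities
table(train$crime) / nrow(train)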

The group means show the mean value of every variable within every class; the means differ between the classes.

Then there are the coefficients of the linear discriminants, one for each variable in each discriminant. We have four target classes, and therefore three linear discriminants (one fewer than the number of classes).

The proportion of trace is the between-group variance. In our model, Linear Discriminant 1 explains almost 95% of the between-group variance.

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1.5)

Part 6 - Predict with the LDA Model

The data was split earlier, so we now have the test set and the correct class labels. Next we will predict the classes with the LDA model on the test data. Based on the trained model, LDA calculates the probabilities of a new observation belonging to each of the classes, and the observation is classified into the class with the highest probability. The probabilities are estimated using Bayes’ theorem.

The results are cross tabulated with the crime categories from the test set.

(The crime categories were already saved from the test set, and the categorical crime variable removed from the test data, in Part 4 above.)

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
tt <- table(correct = correct_classes, predicted = lda.pred$class)
tt
##           predicted
## correct    low med_low med_high high
##   low       18      10        1    0
##   med_low    5      12        3    0
##   med_high   0       8       10    0
##   high       0       0        1   34
error = sum(tt[row(tt) != col(tt)]) / sum(tt)
error
## [1] 0.2745098
summary(test)
##        zn               indus               chas         
##  Min.   :-0.48724   Min.   :-1.44552   Min.   :-0.27233  
##  1st Qu.:-0.48724   1st Qu.:-0.86683   1st Qu.:-0.27233  
##  Median :-0.48724   Median :-0.18028   Median :-0.27233  
##  Mean   : 0.08698   Mean   : 0.01705   Mean   : 0.07506  
##  3rd Qu.: 0.37030   3rd Qu.: 1.01499   3rd Qu.:-0.27233  
##  Max.   : 3.37170   Max.   : 2.11552   Max.   : 3.66477  
##       nox                 rm                age          
##  Min.   :-1.46443   Min.   :-3.05520   Min.   :-2.20168  
##  1st Qu.:-0.92939   1st Qu.:-0.60756   1st Qu.:-1.07375  
##  Median :-0.14407   Median :-0.09413   Median : 0.31884  
##  Mean   : 0.04196   Mean   :-0.09699   Mean   :-0.04501  
##  3rd Qu.: 0.95838   3rd Qu.: 0.40970   3rd Qu.: 0.90679  
##  Max.   : 2.72965   Max.   : 3.00785   Max.   : 1.11639  
##       dis                rad               tax          
##  Min.   :-1.26582   Min.   :-0.9819   Min.   :-1.31269  
##  1st Qu.:-0.76823   1st Qu.:-0.6373   1st Qu.:-0.78313  
##  Median :-0.31478   Median :-0.5225   Median :-0.43751  
##  Mean   : 0.05981   Mean   : 0.1396   Mean   : 0.09143  
##  3rd Qu.: 0.68648   3rd Qu.: 1.6596   3rd Qu.: 1.52941  
##  Max.   : 3.95660   Max.   : 1.6596   Max.   : 1.52941  
##     ptratio             black              lstat         
##  Min.   :-2.70470   Min.   :-3.83367   Min.   :-1.32936  
##  1st Qu.:-0.39518   1st Qu.: 0.26070   1st Qu.:-0.72581  
##  Median : 0.34387   Median : 0.38793   Median :-0.06414  
##  Mean   : 0.06537   Mean   : 0.03782   Mean   : 0.10666  
##  3rd Qu.: 0.80578   3rd Qu.: 0.44062   3rd Qu.: 0.64443  
##  Max.   : 1.63721   Max.   : 0.44062   Max.   : 3.54526  
##       medv         
##  Min.   :-1.76499  
##  1st Qu.:-0.75380  
##  Median :-0.25908  
##  Mean   :-0.07744  
##  3rd Qu.: 0.26826  
##  Max.   : 2.98650

As the cross tabulation shows, the LDA model predicts exceptionally well only the class high when applied to new (test) data. For the rest of the classes, the predictions are roughly 50%-60% correct. Overall, the error rate in this run is about 27%. It is worth noting that because the observations are divided into the training and test sets randomly, we will see a slightly different table every time we run the code.

Part 7 - K-means Algorithm

Now we will look at clustering, starting with distance measures. First, we need to reload the Boston dataset and standardize it.

library(MASS)
data('Boston')

Now we will scale the Boston variables to get comparable distances.

# center and standardize variables
boston_K_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_K_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865
# class of the boston_K_scaled object
class(boston_K_scaled)
## [1] "matrix"
# change the object to data frame
boston_K_scaled <- as.data.frame(boston_K_scaled)

Now we will calculate the distances between the observations. We will use the dist() function for this; by default it uses the Euclidean distance measure to create a distance matrix. We will also look at the Manhattan method.

# euclidean distance matrix
dist_eu <- dist(boston_K_scaled)

# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
# manhattan distance matrix
dist_man <- dist(boston_K_scaled, method = 'manhattan')

# look at the summary of the distances
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618
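
As a quick sanity check of what these numbers mean (my own sketch, not part of the exercises), we can recompute the distance between the first two scaled observations by hand:

# Euclidean and Manhattan distances between observations 1 and 2,
# computed directly from their coordinate differences
x <- unlist(boston_K_scaled[1, ])
y <- unlist(boston_K_scaled[2, ])

sqrt(sum((x - y)^2))  # should equal the first entry of dist_eu
sum(abs(x - y))       # should equal the first entry of dist_man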

K-means is one of the best-known clustering methods, and it is an unsupervised one. It assigns observations to clusters based on the similarity of the objects. We will next run the k-means algorithm on the dataset, using 4 centers to begin with. In theory this could be any number, hence the K.

# k-means clustering
km <- kmeans(boston_K_scaled, centers = 4)

# plot the Boston dataset with clusters
pairs(boston_K_scaled, col = km$cluster)

But now the question arises: what is the optimal number of clusters? In other words, we need to determine K. There is more than one way to skin the proverbial cat here, but we will look at how the total within-cluster sum of squares (WCSS) behaves when we change the number of clusters. The WCSS is calculated as follows:

\(WCSS = \sum_{i=1}^{N} (X_i - \textrm{centroid})^2\)

We will achieve this by plotting the number of clusters against the total WCSS. When the total WCSS drops radically, that number of clusters is the optimal one. So, let’s see where we get our radical drop!

set.seed(123) # k-means assigns the initial cluster centers randomly and so can produce different results on every run; fixing the seed deals with that.

# determine the number of clusters
k_max <- 10

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_K_scaled, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

In this case, two clusters seem optimal: that is where the total WCSS drops most radically. We will therefore run kmeans() again with two clusters and visualize the results. I will split the pairs plot into two separate ones, for more readable results. I was going to use ggpairs but drawing the plot took soooooo long!

# k-means clustering
km <- kmeans(boston_K_scaled, centers = 2, nstart = 20)
km
## K-means clustering with 2 clusters of sizes 177, 329
## 
## Cluster means:
##         crim         zn      indus         chas        nox         rm
## 1  0.7238295 -0.4872402  1.1425514 -0.005407018  1.0824279 -0.4547830
## 2 -0.3894158  0.2621323 -0.6146857  0.002908943 -0.5823397  0.2446705
##          age        dis        rad        tax    ptratio      black
## 1  0.8051309 -0.8439539  1.0834228  1.1693521  0.5471636 -0.6101842
## 2 -0.4331555  0.4540421 -0.5828749 -0.6291043 -0.2943707  0.3282754
##        lstat       medv
## 1  0.8421083 -0.6566834
## 2 -0.4530491  0.3532917
## 
## Clustering vector:
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
##  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   1   2   2   2 
##  37  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  53  54 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
##  55  56  57  58  59  60  61  62  63  64  65  66  67  68  69  70  71  72 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
##  73  74  75  76  77  78  79  80  81  82  83  84  85  86  87  88  89  90 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
##  91  92  93  94  95  96  97  98  99 100 101 102 103 104 105 106 107 108 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   1   1   1   2 
## 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   2   2   1   2   2 
## 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 
##   2   2   2   1   2   1   2   2   1   1   2   2   2   2   2   2   2   2 
## 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   1   1   1   1 
## 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 
##   1   1   1   1   1   1   1   2   2   2   2   2   2   2   2   2   2   2 
## 505 506 
##   2   2 
## 
## Within cluster sum of squares by cluster:
## [1] 1890.637 2686.045
##  (between_SS / total_SS =  35.3 %)
## 
## Available components:
## 
## [1] "cluster"      "centers"      "totss"        "withinss"    
## [5] "tot.withinss" "betweenss"    "size"         "iter"        
## [9] "ifault"
# plot the Boston dataset with clusters
pairs(boston_K_scaled[1:5], col = km$cluster)

pairs(boston_K_scaled[6:10], col = km$cluster)

The first cluster comprises 177 observations, and the second 329. Only 35.3% of the total variance in the dataset is explained by the clustering (the between_SS / total_SS ratio above), which indicates a rather poor fit.
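
The ratio printed in the output can also be pulled straight from the kmeans object; a quick check:

# between-cluster sum of squares as a share of the total sum of squares
km$betweenss / km$totss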

Bonus

Perform k-means on the original Boston data with some reasonable number of clusters (> 2) (I did 3). Remember to standardize the dataset.

library(MASS)
data('Boston')
# center and standardize variables
boston_K2_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_K2_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865
# class of the boston_K2_scaled object
class(boston_K2_scaled)
## [1] "matrix"
# change the object to data frame
boston_K2_scaled <- as.data.frame(boston_K2_scaled)

# kmeans with 3 clusters
km <- kmeans(boston_K2_scaled, centers = 3, nstart = 20)
km
## K-means clustering with 3 clusters of sizes 164, 236, 106
## 
## Cluster means:
##         crim         zn     indus        chas        nox          rm
## 1  0.8046456 -0.4872402  1.117990  0.01575144  1.1253988 -0.46443119
## 2 -0.3760908 -0.3417123 -0.296848  0.01127561 -0.3345884 -0.09228038
## 3 -0.4075892  1.5146367 -1.068814 -0.04947434 -0.9962503  0.92400834
##           age         dis        rad        tax     ptratio      black
## 1  0.79737580 -0.85425848  1.2219249  1.2954050  0.60580719 -0.6407268
## 2 -0.02966623  0.05695857 -0.5803944 -0.6030198 -0.08691245  0.2863040
## 3 -1.16762641  1.19486951 -0.5983266 -0.6616391 -0.74378342  0.3538816
##        lstat        medv
## 1  0.8719904 -0.68418954
## 2 -0.1801190  0.03577844
## 3 -0.9480974  0.97889973
## 
## Clustering vector:
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18 
##   2   2   2   3   3   2   2   2   2   2   2   2   2   2   2   2   2   2 
##  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
##  37  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  53  54 
##   2   2   2   3   3   3   2   2   2   2   2   2   2   2   2   2   3   3 
##  55  56  57  58  59  60  61  62  63  64  65  66  67  68  69  70  71  72 
##   3   3   3   3   3   2   2   2   2   3   3   3   3   2   2   2   2   2 
##  73  74  75  76  77  78  79  80  81  82  83  84  85  86  87  88  89  90 
##   2   2   2   2   2   2   2   2   3   2   3   2   2   2   2   2   2   2 
##  91  92  93  94  95  96  97  98  99 100 101 102 103 104 105 106 107 108 
##   2   2   2   2   2   2   2   3   3   2   2   2   2   2   2   2   2   2 
## 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 
##   2   1   1   1   2   2   2   1   1   1   1   1   1   1   1   1   1   1 
## 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   2   2   2   2   2 
## 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 
##   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 
##   2   2   2   2   2   2   3   3   3   3   3   3   3   3   3   3   3   3 
## 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 
##   3   3   3   3   3   3   3   2   2   2   2   2   2   2   2   2   2   2 
## 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 
##   2   2   2   2   2   2   2   2   3   3   2   2   3   2   2   2   3   3 
## 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 
##   2   2   2   2   3   3   3   2   3   3   2   2   3   2   3   3   3   3 
## 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 
##   3   3   3   3   3   3   2   2   2   3   3   2   2   2   2   3   3   2 
## 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 
##   2   2   2   3   3   3   3   3   3   3   3   3   3   3   3   3   3   3 
## 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 
##   3   3   3   3   3   2   2   2   2   2   3   3   3   3   3   3   3   2 
## 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 
##   3   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2   2 
## 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 
##   2   2   2   2   2   2   2   3   3   2   2   2   2   2   2   2   2   3 
## 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 
##   2   3   3   2   2   3   3   3   3   3   3   3   3   3   1   1   1   1 
## 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 
##   1   1   1   1   1   1   1   2   2   2   2   2   2   2   2   2   2   2 
## 505 506 
##   2   2 
## 
## Within cluster sum of squares by cluster:
## [1] 1717.6354 1394.9385  758.7999
##  (between_SS / total_SS =  45.2 %)
## 
## Available components:
## 
## [1] "cluster"      "centers"      "totss"        "withinss"    
## [5] "tot.withinss" "betweenss"    "size"         "iter"        
## [9] "ifault"

Then we will perform LDA using the clusters as target classes. I will include all the variables in the Boston data in the LDA model.

# access the cluster component in the kmeans result
cluster <- km$cluster

# add the cluster assignments as a new column to boston_K2_scaled
# (binding the column keeps the rows aligned; merge() with no shared key
# column would produce a cross join instead)
bonus_data <- data.frame(boston_K2_scaled, cluster = cluster)

# Create training and test sets

# number of rows in the new dataset 
n <- nrow(bonus_data)

# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
bonus_train <- bonus_data[ind,]

# create test set 
bonus_test <- bonus_data[-ind,]

# Perform LDA
bonus.lda.fit <- lda(cluster ~ ., data = bonus_train)
bonus.lda.fit
## Call:
## lda(cluster ~ ., data = bonus_train)
## 
## Prior probabilities of groups:
##         1         2         3 
## 0.3240231 0.4665036 0.2094733 
## 
## Group means:
##            crim            zn        indus          chas          nox
## 1  0.0016554458 -0.0032816859  0.003320093  0.0016763564  0.002492146
## 2 -0.0001328571  0.0001191164 -0.001702143  0.0033211011 -0.001328476
## 3  0.0027073814  0.0032566259 -0.002699963 -0.0003492724 -0.003361992
##              rm           age           dis           rad          tax
## 1 -0.0007286982  0.0021938001 -0.0047324208  0.0031777545  0.001790359
## 2  0.0017871229 -0.0004809195  0.0006946673 -0.0008323246 -0.001622667
## 3  0.0004078742  0.0003715072  0.0015532537 -0.0010301611 -0.002173667
##        ptratio         black        lstat          medv
## 1  0.002048162 -0.0019099538  0.001157256 -9.782444e-04
## 2 -0.001814217 -0.0004205507 -0.001427496  2.688129e-03
## 3  0.001119350  0.0029583281 -0.000103205  8.492379e-05
## 
## Coefficients of linear discriminants:
##                 LD1         LD2
## crim     0.55072337  0.01850667
## zn       0.44711852  0.76863579
## indus   -0.38179330  0.47500311
## chas    -0.08762222 -0.31334240
## nox     -0.22621821 -0.43228909
## rm       0.02071274  0.01911782
## age      0.71168450  0.04897999
## dis      0.44531388 -1.10905794
## rad     -0.61478741  1.02250037
## tax      0.32188018 -1.24967843
## ptratio  0.19325044  0.28918008
## black    0.36096685  0.36722420
## lstat    0.24613638 -0.41377554
## medv     0.10176888 -0.81631237
## 
## Proportion of trace:
##    LD1    LD2 
## 0.5699 0.4301

Next, we will visualize the results with a biplot (including arrows representing the relationships of the original variables to the LDA solution).

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(bonus_train$cluster)

# plot the lda results
plot(bonus.lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(bonus.lda.fit, myscale = 1.5)

The plot shows the clusters separating along the first two linear discriminants, LD1 and LD2. Judging by the arrow lengths, the variables rad and tax discriminate most strongly between the clusters (their arrows are the longest). The angles between the arrows represent the correlations between the variables (a small angle corresponds to a high positive correlation).

Super-Bonus

Run the code below for the (scaled) train data that you used to fit the LDA. The code creates a matrix product, which is a projection of the data points.

model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)

Next, install and access the plotly package. Create a 3D plot (Cool!) of the columns of the matrix product by typing the code below.

library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color=train$crime)

No kidding, this is cool!!

Sadly, this is where I had to stop this week, and I won’t be doing a comparison here. Thanks for reading to the end, and have a good week!


5. Dimensionality reduction techniques

Part 1. Overview of the Data

For clarity, here are the variables and their short explanations:
* Life.Exp = Life expectancy at birth
* Edu.Exp = Expected years of schooling
* GNI = Gross National Income
* Mat.Mor = Maternal mortality ratio
* Ado.Birth = Adolescent birth rate
* Parli.F = Percentage of female representatives in parliament
* Edu2.FM = Proportion of females with at least secondary education / Proportion of males with at least secondary education
* Labo.FM = Proportion of females in the labour force / Proportion of males in the labour force

# dplyr, corrplot and GGally are available
library(dplyr)
library(corrplot)
library(GGally)
human <- read.table("create_human.csv", sep = "," , header=TRUE)
human <- select(human, -1)
summary(human)
##     Edu2.FM          Labo.FM          Life.Exp        Edu.Exp     
##  Min.   :0.1717   Min.   :0.1857   Min.   :49.00   Min.   : 5.40  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:66.30   1st Qu.:11.25  
##  Median :0.9375   Median :0.7535   Median :74.20   Median :13.50  
##  Mean   :0.8529   Mean   :0.7074   Mean   :71.65   Mean   :13.18  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:77.25   3rd Qu.:15.20  
##  Max.   :1.4967   Max.   :1.0380   Max.   :83.50   Max.   :20.20  
##       GNI            Mat.Mor         Ado.Birth         Parli.F     
##  Min.   :  2.00   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.: 53.50   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 99.00   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 98.73   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.:143.50   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :194.00   Max.   :1100.0   Max.   :204.80   Max.   :57.50

Studying the output of the summary() function, we can see the distributions of the variables. For example, the average life expectancy is 71.65 years, with a minimum of 49 and a maximum of 83.5 years.

Next, we will study a graphical overview of the data. I will do this by drawing a ggpairs plot. We will also draw a correlation plot.

# visualize the 'human' variables
ggpairs(human)

# compute the correlation matrix and visualize it with corrplot
cor_human <- cor(human)
cor_human
##                Edu2.FM      Labo.FM   Life.Exp     Edu.Exp          GNI
## Edu2.FM    1.000000000  0.009564039  0.5760299  0.59325156  0.177158634
## Labo.FM    0.009564039  1.000000000 -0.1400125  0.04732183 -0.012970053
## Life.Exp   0.576029853 -0.140012504  1.0000000  0.78943917  0.192862316
## Edu.Exp    0.593251562  0.047321827  0.7894392  1.00000000  0.121349332
## GNI        0.177158634 -0.012970053  0.1928623  0.12134933  1.000000000
## Mat.Mor   -0.660931770  0.240461075 -0.8571684 -0.73570257 -0.100166367
## Ado.Birth -0.529418415  0.120158862 -0.7291774 -0.70356489 -0.133441591
## Parli.F    0.078635285  0.250232608  0.1700863  0.20608156  0.002672262
##              Mat.Mor  Ado.Birth      Parli.F
## Edu2.FM   -0.6609318 -0.5294184  0.078635285
## Labo.FM    0.2404611  0.1201589  0.250232608
## Life.Exp  -0.8571684 -0.7291774  0.170086312
## Edu.Exp   -0.7357026 -0.7035649  0.206081561
## GNI       -0.1001664 -0.1334416  0.002672262
## Mat.Mor    1.0000000  0.7586615 -0.089439999
## Ado.Birth  0.7586615  1.0000000 -0.070878096
## Parli.F   -0.0894400 -0.0708781  1.000000000
corrplot(cor_human, method="square", type="lower", cl.pos="b", tl.pos="d", tl.cex = 0.6)

# cor(human) %>% corrplot was the code given in DataCamp, but I find it slightly confusing to present the data in that way; I prefer the plot I wrote above.

Let’s look at the distribution of each variable next (a rough numeric check of the skew directions follows the list). All the variables are unimodally distributed (they have one peak).
* Edu2.FM left-skewed distribution
* Labo.FM left-skewed distribution
* Life.Exp left-skewed distribution
* Edu.Exp slightly left-skewed distribution, almost symmetric
* Mat.Mor right-skewed distribution
* Ado.Birth right-skewed distribution
* Parli.F right-skewed distribution
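
Since for a unimodal distribution the mean tends to lie on the long-tail side of the median, comparing the two is a rough numeric check of these skew calls (my own sketch):

# mean below the median suggests left skew; mean above it, right skew
sapply(human, function(v) c(mean = mean(v), median = median(v)))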

The plot above shows that the strongest correlation (in absolute value) is between the variables Mat.Mor and Life.Exp, with a correlation of -0.857. Other very high correlation pairs include:
* Edu.Exp and Life.Exp (positive)
* Mat.Mor and Edu.Exp (negative)
* Ado.Birth and Life.Exp (negative)
* Ado.Birth and Edu.Exp (negative)
* Ado.Birth and Mat.Mor (positive)

The corrplot above shows a visual representation of these correlations very clearly too. As we learned last week, positive correlations are displayed in blue and negative correlations in red. The colour intensity and the size of each square are proportional to the correlation coefficients.

Particularly low correlations occur with the “percentage of female representatives in parliament” variable.

As we can see again, the variables have some rather high correlations between them. Now is, however, a good time to remember that correlation does not imply causation, and that in the social sciences particularly, the relationships between variables are often much more complex than meets the eye.


Part 2. Principal Component Analysis (PCA)

In PCA, we decompose a data matrix into smaller matrices, allowing us to extract the underlying principal components. Ideally, the variance along these principal components is a reasonable characterization of the complete data set.

There are two different methods for computing PCA (from linear algebra): the Eigenvalue Decomposition and the Singular Value Decomposition (SVD). The prcomp() function in R uses the SVD, which is the numerically more accurate, and therefore preferred, method.
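
As a small illustration of this (my own sketch, not part of the exercises): prcomp() essentially computes the SVD of the centered data matrix, and the component standard deviations can be recovered from the singular values:

# center the data (prcomp(human) centers but does not scale by default)
X <- scale(human, center = TRUE, scale = FALSE)
sv <- svd(X)

# singular values divided by sqrt(n - 1) should match prcomp(human)$sdev
sv$d / sqrt(nrow(X) - 1)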

We will now perform PCA (via the SVD) on the human dataset. We will first do this for the non-standardized data.

pca_human <- prcomp(human)
pca_human
## Standard deviations (1, .., p=8):
## [1] 214.3202186  54.4858892  26.3814123  11.4791149   4.0668732   1.6067062
## [7]   0.1905111   0.1586732
## 
## Rotation (n x k) = (8 x 8):
##                     PC1           PC2           PC3           PC4
## Edu2.FM   -0.0007468424  4.707363e-04 -0.0001689810 -0.0004252438
## Labo.FM    0.0002210928  5.783875e-05 -0.0007563442 -0.0047679411
## Life.Exp  -0.0334485095  1.573401e-02 -0.0303860782 -0.0780112042
## Edu.Exp   -0.0098109042  2.441895e-03 -0.0223269972 -0.0369246053
## GNI       -0.0278166075  9.979413e-01  0.0557767706 -0.0002740466
## Mat.Mor    0.9879825869  3.620190e-02 -0.1474236772 -0.0068326906
## Ado.Birth  0.1479114018 -5.046353e-02  0.9867754042 -0.0069374879
## Parli.F   -0.0048164618 -1.494485e-03 -0.0026652825 -0.9962093264
##                     PC5           PC6           PC7           PC8
## Edu2.FM    0.0002964017 -0.0254740423  6.859097e-01  7.272399e-01
## Labo.FM   -0.0034895150 -0.0320024527  7.265303e-01 -6.863629e-01
## Life.Exp  -0.9759219723  0.1979058277  5.461806e-03  2.081465e-03
## Edu.Exp   -0.1947276781 -0.9790080609 -4.054956e-02  3.992920e-03
## GNI        0.0150268685 -0.0014692010 -3.631899e-04 -3.767644e-04
## Mat.Mor   -0.0282554922 -0.0005722709  1.213189e-04  8.299757e-04
## Ado.Birth -0.0393039689 -0.0160315111 -4.359137e-05 -9.463203e-05
## Parli.F    0.0841200882  0.0210694313 -2.695182e-03  2.658636e-03

This dataset has eight principal components. The analysis first shows the standard deviations of the components, and then the rotation matrix (the loadings of the original variables on each component).
The first principal component, PC1, captures the maximum amount of variance from the features in the original data.
PC2 is uncorrelated with the first, and captures the maximum amount of the remaining variability.
The same applies to the rest of the principal components: they are all mutually uncorrelated, and each captures less variance than the one before.
Now, let’s look at a biplot of the above data.

# create and print out a summary of pca_human
s <- summary(pca_human)
s
## Importance of components:
##                             PC1      PC2      PC3      PC4     PC5     PC6
## Standard deviation     214.3202 54.48589 26.38141 11.47911 4.06687 1.60671
## Proportion of Variance   0.9233  0.05967  0.01399  0.00265 0.00033 0.00005
## Cumulative Proportion    0.9233  0.98298  0.99697  0.99961 0.99995 1.00000
##                           PC7    PC8
## Standard deviation     0.1905 0.1587
## Proportion of Variance 0.0000 0.0000
## Cumulative Proportion  1.0000 1.0000
# rounded percentages of variance captured by each PC
pca_pr <- round(100*s$importance[2,], digits = 1) 

# print out the percentages of variance
pca_pr
##  PC1  PC2  PC3  PC4  PC5  PC6  PC7  PC8 
## 92.3  6.0  1.4  0.3  0.0  0.0  0.0  0.0
# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")

# draw a biplot
biplot(pca_human, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2], main="PCA Biplot with Non-Standardized Variables: what a giant mess!")
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

As a note about this biplot: the observations are displayed by their scores on the first two principal components (the x- and y-axes), and the arrows visualize the connections between the original variables and the PCs.

The angles between the arrows that represent the original variables show the correlations between the variables. A small angle represents a high positive correlation.

The angle between a variable and a PC axis shows the correlation between the two. Again, a small angle represents a high positive correlation.

The lengths of the arrows are proportional to the standard deviations of the variables.
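
That last point explains why the unstandardized biplot above is such a mess; a quick look at the raw standard deviations (my own check) shows how unevenly the variables are scaled:

# raw standard deviations of the unscaled variables
sapply(human, sd)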


Part 3. Principal Component Analysis (PCA) with Standardized Variables

It is worth noting that PCA is sensitive to scaling: it assumes that features (variables) with larger variances are more important than those with smaller variances. Therefore, scaling the variables before PCA is a good idea. Let’s do that now.

# human_std <- scale(human) is how I have scaled data in the previous weeks. Now I found the following:
pca_human <- prcomp(human, scale. = TRUE)
pca_human
## Standard deviations (1, .., p=8):
## [1] 1.9658091 1.1387842 0.9899881 0.8659777 0.6993077 0.5400078 0.4670082
## [8] 0.3317153
## 
## Rotation (n x k) = (8 x 8):
##                   PC1         PC2         PC3         PC4         PC5
## Edu2.FM   -0.38438136  0.04973416 -0.08668975  0.31301506 -0.82578999
## Labo.FM    0.06452258  0.72023430 -0.10293459  0.59483194  0.20499428
## Life.Exp  -0.46758272 -0.01554845  0.02106889 -0.09983270  0.19702138
## Edu.Exp   -0.44611813  0.14839516  0.06836143  0.09857478  0.22718207
## GNI       -0.11121526 -0.01842917 -0.97298099 -0.16503899  0.07296310
## Mat.Mor    0.47036562  0.12782929 -0.11999280  0.04185463  0.04609075
## Ado.Birth  0.43435393  0.06675352 -0.06553107 -0.05173116 -0.38364001
## Parli.F   -0.09031518  0.65984104  0.10671258 -0.70487388 -0.17604400
##                    PC6          PC7         PC8
## Edu2.FM    0.139911426  0.100381612  0.18084014
## Labo.FM    0.008135958 -0.256990684 -0.06742229
## Life.Exp  -0.393884789 -0.421527811  0.63171658
## Edu.Exp   -0.415647904  0.716119209 -0.16542557
## GNI        0.005876052  0.009858605 -0.08891751
## Mat.Mor    0.085633294  0.472803692  0.71642517
## Ado.Birth -0.792816131 -0.096488804 -0.12191407
## Parli.F    0.128550324  0.020403204 -0.01688909
# create and print out a summary of pca_human
s <- summary(pca_human)
s
## Importance of components:
##                          PC1    PC2    PC3     PC4     PC5     PC6     PC7
## Standard deviation     1.966 1.1388 0.9900 0.86598 0.69931 0.54001 0.46701
## Proportion of Variance 0.483 0.1621 0.1225 0.09374 0.06113 0.03645 0.02726
## Cumulative Proportion  0.483 0.6452 0.7677 0.86140 0.92253 0.95898 0.98625
##                            PC8
## Standard deviation     0.33172
## Proportion of Variance 0.01375
## Cumulative Proportion  1.00000
# rounded percentages of variance captured by each PC
pca_pr <- round(100*s$importance[2,], digits = 1) 

# print out the percentages of variance
pca_pr
##  PC1  PC2  PC3  PC4  PC5  PC6  PC7  PC8 
## 48.3 16.2 12.3  9.4  6.1  3.6  2.7  1.4
# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")

# draw a biplot
biplot(pca_human, cex = c(0.6, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2], main="PCA Biplot with Standardized Variables: a nicely readable plot!")

The first biplot is very difficult to read and therefore to interpret, and unfortunately at present I lack the skills to make it more readable. The results of the two analyses are, however, clearly different. The standardized analysis is the one we should concentrate on, and luckily that is also the one with the readable biplot. The reason for concentrating on the standardized analysis (and the reason why the analyses differ) was mentioned above: PCA is sensitive to scaling and assumes that features (variables) with larger variances are more important than those with smaller variances.

Including the percentages for the first two principal components in the biplot tells us that PC1 and PC2 together account for 98.3% of the variance in the original data in the non-standardized analysis, but only 64.5% in the standardized version. This again demonstrates why standardizing the variables is a good idea: in the non-standardized version the model places far too much weight on the variables with the largest variances, which badly skews the entire model. We would ideally include more PCs in our standardized analysis to capture more of the variance. The selected PCs should capture around 80% of the variance for the analysis to be reliable, so 3 PCs (77%) or 4 PCs (86%) would be good.
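
The summary object already contains the cumulative proportions, so picking the number of PCs for a given coverage target is a one-liner; a small sketch:

# cumulative proportion of variance is the third row of the importance table
s$importance[3, ]

# the first component count whose cumulative proportion reaches 80%
which(s$importance[3, ] >= 0.80)[1]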

Next, we will look at the correlations. If we look at, for example, the arrows that represent Mat.Mor and Ado.Birth in the standardized biplot, we can see that the angle between them is quite small. This corresponds to their high positive correlation (0.7586615). The correlations themselves are of course identical for the standardized and non-standardized data; the non-standardized biplot is simply too distorted by the differing variances for the angles to be read off reliably.

The lengths of the arrows, which represent the standard deviations of the variables, differ dramatically between the non-standardized and standardized analyses. This is because scaling sets every variable’s standard deviation to 1, so in the standardized analysis the arrows are far more uniform in length.


Part 4. Interpretations of PC1 and PC2 Dimensions

Looking at the biplot drawn after PCA on the standardized human data, we can see that there is one major cluster (call this C1), and a smaller cluster almost directly below it (C2). There is also a more scattered cluster diagonally to the top-right from the main cluster (C3). The rest of the countries (shown as numbers) are scattered around more or less randomly.

Since the country clusters C1 and C3 differ along PC1, the differences are likely due to the variables that have a heavy influence on PC1. Those variables, as can be seen in the PCA table, are Life.Exp, Edu.Exp, Mat.Mor and Ado.Birth.

Since the C1 and C2 clusters differ along PC2, the variables that heavily influence PC2 are likely to be responsible. Those variables, per the PCA table, are Labo.FM and Parli.F.

The arrows in relation to one another can be interpreted as follows:

* When two arrows form a small angle, the variables are positively correlated. Example: Mat.Mor and Ado.Birth (0.76 correlation).
* When they meet at around 90°, they are not likely to be correlated. Example: Edu2.FM and Labo.FM (0.01).
* When they form a large angle (close to 180°), they are negatively correlated. Example: Mat.Mor and Life.Exp (-0.86).

So, let’s hazard a guess at what some of this means.

Looking at the countries in cluster 3, we find countries in Sub-Saharan Africa: Senegal, Malawi, Lesotho, Ethiopia and Kenya. Some of the countries in cluster 1 include Cyprus, Lithuania and Thailand. Seeing these countries helps the whole picture make more sense: very poor countries sit in one cluster, separated from better-off countries by differences in variables such as maternal mortality and life expectancy. The point is further illuminated when we look for the richest countries in the world, such as Norway and Denmark, and find them even further to the left of the Sub-Saharan countries on the PC1 axis. Hence we can probably conclude that differences in these variables separate the rich, middle-income and poor countries of the world.

But what about the PC2 axis? There, in cluster 2, are countries such as Cyprus and Sri Lanka, which are separated from the fairly well-to-do countries mentioned above by differences in the “percentage of female representatives in parliament” and “proportion of females to males in the labour force” variables.

The observations I’ve made above got me thinking about changing the plot further. The countries in the HDI index are listed according to the Human Development Index score they received. What if we colour-code the countries according to the scores they received? What would the plot reveal then?

In the HDI technical notes, the following groupings can be found:

Very high human development: 0.800 and above
High human development: 0.700-0.799
Medium human development: 0.550-0.699
Low human development: Below 0.550

So, let’s use those groupings to color our countries.

First things first (and bear with me here, this might take up some space): I need to create a new dataset humans that will include the HDI score. I did this in a separate R script file to save space, hence the hashtags below…

# humans <- read.table("http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/human1.txt", sep  =",", header = T)
library(stringr)
library(dplyr)
# humans$GNI <- str_replace(humans$GNI, pattern = ",", replace = "") %>% as.numeric()
# humans$GNI <- as.numeric(humans$GNI)
# keep <- c("HDI", "Country", "Edu2.FM", "Labo.FM", "Life.Exp", "Edu.Exp", "GNI", "Mat.Mor", "Ado.Birth", "Parli.F")
# humans <- select(humans, one_of(keep))
# complete.cases(humans)
# data.frame(humans[-1], comp = complete.cases(humans))
# humans <- filter(humans, complete.cases(humans))
# tail(humans, 10)
# last <- nrow(humans) - 7
# humans <- humans[1:last, ]
# rownames(humans) <- humans$Country
# humans <- select(humans, -Country)
# str(humans)
# write.csv(humans, file = "create_humans.csv", row.names=TRUE)

Then, we need to add a column to our humans dataset that specifies the HDI development group as per the technical notes.

humans <- read.table("create_humans.csv", sep = ",", header = TRUE)
# the upper bounds are exclusive, so consecutive groups meet without gaps
humans$devt[humans$HDI >= 0.800] <- "Very High"
humans$devt[humans$HDI >= 0.700 & humans$HDI < 0.800] <- "High"
humans$devt[humans$HDI >= 0.550 & humans$HDI < 0.700] <- "Medium"
humans$devt[humans$HDI < 0.550] <- "Low"
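
The same grouping can be done in one step with cut(); a small alternative sketch:

# equivalent grouping with cut(); right = FALSE makes the intervals of the
# form [low, high), so e.g. HDI = 0.800 falls into "Very High"
humans$devt <- cut(humans$HDI,
                   breaks = c(-Inf, 0.550, 0.700, 0.800, Inf),
                   labels = c("Low", "Medium", "High", "Very High"),
                   right = FALSE)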

Now we are ready to standardize the dataset, perform PCA (excluding HDI, as it wasn’t part of the previous analysis set, devt, as it is only needed for grouping, and the country-name column X), and draw the biplot.

pca_humans <- prcomp(~ . -X -HDI -devt, data = humans, scale. = TRUE)

# create and print out a summary of pca_human
s <- summary(pca_humans)
s
## Importance of components:
##                          PC1    PC2    PC3     PC4     PC5     PC6     PC7
## Standard deviation     1.966 1.1388 0.9900 0.86598 0.69931 0.54001 0.46701
## Proportion of Variance 0.483 0.1621 0.1225 0.09374 0.06113 0.03645 0.02726
## Cumulative Proportion  0.483 0.6452 0.7677 0.86140 0.92253 0.95898 0.98625
##                            PC8
## Standard deviation     0.33172
## Proportion of Variance 0.01375
## Cumulative Proportion  1.00000

Then, as I was trying to get the plot coloured as I wanted it, I came across this excellent tutorial!

# First I needed to access some stuff
# install.packages("devtools")
# install.packages("R6")
library(devtools)
# devtools::install_github("vqv/ggbiplot")

library(ggbiplot)
## Loading required package: plyr
## -------------------------------------------------------------------------
## You have loaded plyr after dplyr - this is likely to cause problems.
## If you need functions from both plyr and dplyr, please load plyr first, then dplyr:
## library(plyr); library(dplyr)
## -------------------------------------------------------------------------
## 
## Attaching package: 'plyr'
## The following objects are masked from 'package:plotly':
## 
##     arrange, mutate, rename, summarise
## The following objects are masked from 'package:dplyr':
## 
##     arrange, count, desc, failwith, id, mutate, rename, summarise,
##     summarize
## Loading required package: scales
## Loading required package: grid
g <- ggbiplot(pca_humans, obs.scale = 1, var.scale = 1, 
              groups = humans$devt, ellipse = TRUE, 
              circle = TRUE)
g <- g + scale_color_discrete(name = '')
g <- g + theme(legend.direction = 'vertical', 
               legend.position = 'right')
print(g)

Now, this is a plot I find much easier to interpret. The colours show us the clusters much more easily, and I feel they are needed, as the previous method did not make the clusters easy to see at all. This is largely because the clusters are not tightly packed.

Most differences are the result of the variables that load heavily on the PC1 axis. As observed previously, those variables are Life.Exp, Edu.Exp, Mat.Mor and Ado.Birth. It makes perfect sense that countries with a higher Human Development Index score are furthest away from the lowest-scoring countries, and they are ordered (as expected) Very High -> High -> Medium -> Low along the first principal component axis. It is worth noting that the low-HDI countries are much more spread out on the plot, indicating more variance in their variables than in those of the higher-scoring countries. In fact, the higher in the index we go, the tighter the clusters appear.

A couple of noteworthy points about the above plot:
- The data ellipses capture 68% (the default) of the observations in each of the 4 HDI groups.
- The large circle shows the theoretical maximum reach of the arrows.


Part 5. It’s MCA Tea Time!

Let’s have some tea, shall we?

Multiple Correspondence Analysis (MCA) is an extension of Correspondence Analysis (CA). It allows us to analyze the pattern of relationships found within several categorical dependent variables.

MCA can also be viewed as a generalization of PCA to the case where the variables analyzed are categorical instead of quantitative. A very helpful article on MCA in R can be found here.

So, let’s put MCA to use to find groups of individuals with similar profiles in their answers to the survey questions and also to find the associations between variable categories.

First, we will access some packages and the tea dataset. The original dataset consists of 300 observations and 36 variables, and contains the results of a survey of tea drinkers.
The ?tea query gives us the following information:

“The data used here concern a questionnaire on tea. We asked to 300 individuals how they drink tea (18 questions), what are their product’s perception (12 questions) and some personal details (4 questions).”

AND

“A data frame with 300 rows and 36 columns. Rows represent the individuals, columns represent the different questions. The first 18 questions are active ones, the 19th is a supplementary quantitative variable (the age) and the last variables are supplementary categorical variables.”

We will choose which columns to keep for our dataset, look at the summaries and structure of the data, and then do some simple visualizations.

# the tea dataset and packages FactoMineR, ggplot2, dplyr and tidyr are available
library(FactoMineR)
data(tea)
library(ggplot2)
library(dplyr)
library(tidyr)

# column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")

# select the 'keep_columns' to create a new dataset
tea_time <- select(tea, one_of(keep_columns))

# look at the summaries and structure of the data
summary(tea_time)
##         Tea         How                      how           sugar    
##  black    : 74   alone:195   tea bag           :170   No.sugar:155  
##  Earl Grey:193   lemon: 33   tea bag+unpackaged: 94   sugar   :145  
##  green    : 33   milk : 63   unpackaged        : 36                 
##                  other:  9                                          
##                   where           lunch    
##  chain store         :192   lunch    : 44  
##  chain store+tea shop: 78   Not.lunch:256  
##  tea shop            : 30                  
## 
str(tea_time)
## 'data.frame':    300 obs. of  6 variables:
##  $ Tea  : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How  : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ how  : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
# visualize the dataset
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped

The data frame now consists of 300 observations and 6 variables. The bar plots nicely help visualize the spread of each variable.

Now we will perform the MCA and print its summary, as well as visualize it.

# multiple correspondence analysis
mca <- MCA(tea_time, graph = FALSE)

# summary of the model
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
## Variance               0.279   0.261   0.219   0.189   0.177   0.156
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953
##                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.144   0.141   0.117   0.087   0.062
## % of var.              7.841   7.705   6.392   4.724   3.385
## Cumulative % of var.  77.794  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898
##                       cos2  v.test     Dim.3     ctr    cos2  v.test  
## black                0.003   0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            0.027   2.867 |   0.433   9.160   0.338  10.053 |
## green                0.107  -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone                0.127  -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                0.035   3.226 |   1.329  14.771   0.218   8.081 |
## milk                 0.020   2.422 |   0.013   0.003   0.000   0.116 |
## other                0.102   5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag              0.161  -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged   0.478  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged           0.141  -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |
# visualize MCA
plot(mca, invisible=c("ind"), habillage = "quali")

Let’s look at the output of the MCA summary first.

The Eigenvalues section at the top shows the variances and the percentages of variance retained by each dimension.

Then, in the next section, Individuals, we have the individuals’ coordinates on the dimensions (e.g. Dim.1), each individual’s percentage contribution to a dimension (ctr) and the squared correlations (cos2) with the dimensions.

The Categories table specifies the coordinates of the variable categories (e.g. Dim.1), the contribution percentage (ctr), the squared correlations (cos2) and the v.test value. The v.test follows a normal distribution: if its absolute value exceeds 1.96, the coordinate is significantly different from zero. We can see that this is the case on the first dimension for all of the listed categories except alone and other.

Finally, we have the Categorical variables (eta2) table. It shows the squared correlation between each variable and each dimension. A value close to one indicates a strong connection between the variable and the dimension. The variables how and where, although not very close to one, are the closest at around 0.7 on the first dimension.

We can use the invisible argument in the code to choose what to plot: hiding “ind” plots only the variable categories, hiding “var” plots only the individuals, and “none” (which matches nothing) leaves both visible. (There is also the option of plotting the supplementary or background variables.) By making “ind” invisible, the plot above shows the variable categories.

The MCA factor map can be interpreted by looking at the distances between variable categories: the distance is a measure of their similarity. In our plot, the categories chain store and tea bag lie close together. This tells us that individuals who buy their tea from chain stores mainly drink teabag tea, and vice versa. black and No.sugar are also very close, indicating that people who drink their tea black also tend not to use sugar.
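
We can sanity-check the chain store / tea bag reading with a plain cross tabulation (my own quick check):

# most chain store buyers should fall in the tea bag column
table(tea_time$where, tea_time$how)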

Let’s plot individuals and then both.

Note: the habillage argument can be set to a qualitative variable (by its index or name) to colour the individuals by group.

I experimented with all the variables as the habillage argument, and found the clearest groupings with how and where (note that these are the same variables that were closest to one in the categorical variables table). In fact, those two yielded remarkably similar results. For this analysis, I chose how. Let’s see how how makes our plot look:

plot(mca, invisible=c("var"), habillage = "how")

Let’s also look at the plot that includes both the variables and the individuals.

plot(mca, invisible=c("none"), habillage = "how")

What if we want to run MCA on the entire tea dataset? By accessing ?tea we find the following code for MCA. The arguments specify that variable 19 is a supplementary quantitative variable and that variables 20 through 36 are supplementary categorical variables. It follows that variables 1 through 18 are the active variables.

The following code omits graph = FALSE, so the default graphs are displayed.

res.mca <- MCA(tea, quanti.sup = 19, quali.sup = 20:36)

Let’s next look at how these plots can be interpreted. A very helpful video can be found here.

Graph 1: MCA factor map - This plot shows the individuals. There isn’t a clear cluster of individuals here, so there isn’t much to say about this plot.

Now bear with me please while I present the graphs out of order; Graph 2 will come last, and we’ll soon see why.

Graph 3: Plot of the variables - this graph shows which variables are connected to the two dimensions. In our example, the variable where is linked to both dimensions, as are price and how (just not as strongly). We can also see that at the bottom left of the graph there is a whole bunch of variables that are not at all strongly linked to the dimensions.

Graph 4: Supplementary variables on the MCA factor map - this graph and the circle within it are similar to the correlation circle in PCA. If an arrow reaches close to the circle, the correlation between the variable and the two dimensions is strong. As we can see in our example, age is clearly not strongly correlated with the dimensions in question.

Graph 2: MCA factor map - This graph of the categories is more interesting than the plot of the individuals in Graph 1. However, the problem here is that there are many categories, which makes the graph difficult to read. We can use FactoMineR’s plotting options to make the information more legible. Let’s do that below; I have included code comments.

plot(res.mca, invisible=c("ind", "quali.sup"), selectMod="contrib 10", cex=0.8) 

# here we plot the categories, making ind and quali.sup invisible, just like in the example given in DataCamp. This time we also make the font smaller (cex) for improved readability, and selectMod = "contrib 10" picks the 10 categories that contribute most to the first two dimensions. Please note that the points that are not selected are still drawn, but transparent.

The chart above now shows us the ten categories that contribute the most to the two dimensions. I find it quite interesting that the price category p_upscale is this strongly present on the second dimension, along with the category tea shop from the variable where and the category unpackaged from the variable how.

Finally, let’s look at one more plot, just out of curiosity (and hope we don’t go the way of the proverbial cat :) ).

plot(res.mca, invisible = c("var","quali.sup"), habillage = "frequency")

This plot is rather handy for seeing certain characteristics of the survey respondents in graphical form. As we can see, the colour green dominates, indicating that most of the tea survey respondents like to drink a nice cuppa more than twice a day. I suspect that this survey was carried out in Britain, and if that is the case, I do not doubt the accuracy of this assessment one bit!

That’s all for this week. Thanks so much for reading this to the very end, and have a great week!


6. Analysis of Longitudinal Data

Hello!!

Welcome to the last of the weekly analysis entries! This week was interesting. Not only were there some major issues (more on those later), I also thought I’d experiment with the layout of the html some…

This week we will go back to building statistical models. Here is a quote I borrowed from the course instructions:

“The new challenge here is that the data may (and will) include two types of dependencies simultaneously: In addition to the more or less correlated variables that we have faced with all models and methods so far, the observations of the data will also be intercorrelated. Usually (in the above mentioned models and methods), we can (often pretty safely) assume that the observations are independent of each other. However, in longitudinal data this assumption seldom holds, because we have multiple observations or measurements of the same individuals. The concept of repeated measures highlights this phenomenon that is actually quite typical in many applications. Both types of dependencies must be taken into account; otherwise the models will be biased.”

To cope with the type of setting described above, we will be applying linear mixed effects models.

Prior to creating this diary entry, I performed some data wrangling on two data sets, BPRS and RATS. Here is some information on the two datasets:

“40 male subjects were randomly assigned to one of two treatment groups and each subject was rated on the brief psychiatric rating scale (BPRS) measured before treatment began (week 0) and then at weekly intervals for eight weeks. The BPRS assesses the level of 18 symptom constructs such as hostility, suspiciousness, hallucinations and grandiosity; each of these is rated from one (not present) to seven (extremely severe). The scale is used to evaluate patients suspected of having schizophrenia” (from DataCamp).

RATS is from a “nutrition study conducted in three groups of rats (Crowder and Hand, 1990). The three groups were put on different diets, and each animal’s body weight (grams) was recorded repeatedly (approximately weekly, except in week seven when two recordings were taken) over a 9-week period. The question of most interest is whether the growth profiles of the three groups differ”. (p. 22 here).

So, let’s get to work, shall we!

Just to add interest (and some challenges), this week we are swapping data sets. In practice, this means that instead of conducting the Chapter 8 exercises of MABS using the BPRS data, we are using RATS. Same with Chapter 9: BPRS instead of RATS.

Oh, rats!

Please note that from section 1C onwards I encountered a serious problem with R Markdown. The chapter itself knitted absolutely fine to html, but when I tried to knit my index.Rmd file, the knitting failed. After various attempts at fixing the issue, and spending two days with it, I had to admit defeat. So, in order to have a chapter to submit at all this week, I have had to disable various code chunks. Where this is the case, I have included a screenshot of my code and output. My apologies for the resulting somewhat messy report this week!


Part 1. Implementing the Analyses of Chapter 8 of MABS Using the RATS Data

Turns out I need to include the code from meet_and_repeat.R here in order for the Markdown to knit to html…

library(dplyr)
library(tidyr)

# Read in the wide-form RATS data
RATS <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/rats.txt", header = TRUE, sep = '\t')

# Convert the categorical variables to factors
RATS$ID <- factor(RATS$ID)
RATS$Group <- factor(RATS$Group)

# Convert to long form and extract the measurement day from the WD column names
RATSL <- RATS %>%
  gather(key = WD, value = Weight, -ID, -Group) %>%
  mutate(WDS = as.integer(substr(WD, 3, 4)))

1A. Individuals on the plot

First, let’s look at some graphical representations of the long RATS data.

To begin we shall plot the Weight values for all 16 rats, and we will differentiate between the diet groups into which the rats have been placed. These simple graphs make a number of features of the data readily apparent.

# Access the package ggplot2
library(ggplot2)

# Draw the plot
p <- ggplot(RATSL, aes(x = WDS, y = Weight, col = ID))
p1 <- p + geom_line()
p2 <- p1 + scale_linetype_manual(values = rep(1:10, times=4))
p3 <- p2 + facet_grid(. ~ Group, labeller = label_both)
p4 <- p3 + theme(legend.position = "none")
p5 <- p4 + scale_y_continuous(limits = c(min(RATSL$Weight), max(RATSL$Weight)))
p5

Inspecting the graphs above, we can see how the rats’ weight increases over time in all the different groups. These plots are helpful in visualizing the weight growth profiles of individual rats. As we can see, rats with lower starting body weight have been grouped together, and that group shows the least actual (as opposed to proportional) weight increase.

1B. The Golden Standardise

Let’s take a look at how our data behaves once we standardize the Weight variable. We do this by subtracting the relevant (measurement-day) mean from each original value and then dividing by the standard deviation. Here’s the math for it:

\[standardized(x)=\frac{x-mean(x)}{sd(x)}\]

And here comes the code for the graph:

# Standardise the variable Weight
RATSL <- RATSL %>%
  group_by(WDS) %>%
  mutate(stdweight = (Weight - mean(Weight))/sd(Weight) ) %>%
  ungroup()

# Glimpse the data
glimpse(RATSL)
## Observations: 176
## Variables: 6
## $ ID        <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 1...
## $ Group     <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1...
## $ WD        <chr> "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD...
## $ Weight    <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 44...
## $ WDS       <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8...
## $ stdweight <dbl> -1.1362255, -1.2541867, -1.0969051, -0.9789439, -1.0...
# Plot again with the standardised Weight
ggplot(RATSL, aes(x = WDS, y = stdweight, col = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  scale_y_continuous(name = "standardized Weight")

Interesting results! In Group 1, the standardized weights of the rats were much closer to one another at the end of the experiment - see how the lines converge towards the end?

In Group 2 we can see all sorts going on. The rats with the lowest standardized weight had a definite increase, while the largest rat had a moderate increase and the medium-sized one’s standardized weight declined.

In Group 3 the weights were closer to one another at the end of the experiment, similarly to Group 1.

1C. Good things come in Summary graphs

With a fairly large number of observations, graphical displays of individual profiles are often of little use. Researchers commonly produce graphs showing the average (mean) profile for each group, along with some indication of the variation of the observations at each time point - in this case, the standard error of the mean:

\[se=\frac{sd(x)}{\sqrt{n}}\]

Although in our RATS dataset the number of individuals is much smaller (16) than in the BPRS set (40), we will do this nonetheless.

Below are the first of the screen prints that became necessary:

[screenshot: code chunk]

[screenshot: console output]

[screenshot: graph]
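Since the chunk itself had to be disabled, here is a sketch of roughly what it contained, following the DataCamp approach (the object name RATSS is my choice and may differ from the original):

# Create a summary dataset: mean and standard error of Weight by Group and day
RATSS <- RATSL %>%
  group_by(Group, WDS) %>%
  summarise(mean = mean(Weight), se = sd(Weight)/sqrt(n())) %>%
  ungroup()

# Plot the mean profiles with error bars (mean +/- standard error)
ggplot(RATSS, aes(x = WDS, y = mean, linetype = Group, shape = Group)) +
  geom_line() +
  geom_point(size = 3) +
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se), width = 0.3) +
  scale_y_continuous(name = "mean(Weight) +/- se(Weight)")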

Note that unlike in the BPRS data, there is no overlap in the mean profiles of the three groups. This suggests there is a difference between the three groups with respect to the mean Weight values. Little rats remain little rats!

1D. Find the outlaw… Outlier!

It is time to find the outlier rat - if there is one! We will look at the RATS values recorded after the first measurement day (the baseline). The mean of the remaining weeks of data will be our summary measure. We will first calculate this measure and then look at boxplots of it for each diet group.

In the code chunk below, everything works fine up until the ggplot part, suggesting that that is where the problem lies… However, for the life of me I can NOT figure out why it fails when knitting the index, and not before… So, after the output you will again find my screen prints.

RATSL9S <- RATSL %>%
  filter(WDS > 1) %>%
  group_by(Group, ID) %>%
  summarise(mean=mean(Weight)) %>%
  ungroup()

glimpse(RATSL9S)
## Observations: 1
## Variables: 1
## $ mean <dbl> 386.3375
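A side note: the glimpse above shows only one row and one variable, whereas grouping by Group and ID should yield 16 rows of three variables. One possible culprit - an assumption on my part, since I cannot reproduce the knitting environment - is that another package (such as plyr) has masked dplyr’s summarise, which would also explain why the chunk behaves differently when knitting the index. Calling the function explicitly guards against this:

# Explicitly call dplyr's summarise in case another package masks it
RATSL9S <- RATSL %>%
  filter(WDS > 1) %>%
  group_by(Group, ID) %>%
  dplyr::summarise(mean = mean(Weight)) %>%
  ungroup()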
[screenshot: ggplot code]

[screenshot: graph]
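For completeness, here is a sketch of what the disabled ggplot chunk presumably looked like, following the course approach (note that in newer ggplot2 versions the fun.y argument is called fun):

# Boxplots of the mean summary measure by diet group,
# with each group mean marked by a white diamond
ggplot(RATSL9S, aes(x = Group, y = mean)) +
  geom_boxplot() +
  stat_summary(fun.y = mean, geom = "point", shape = 23, size = 4, fill = "white") +
  scale_y_continuous(name = "mean(Weight), days 8-64")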

The group with the most interesting results here is Group 2. The mean lies at the very edge of the box, and the distribution is heavily skewed. Group 2 also shows one outlier (an outlaw rat!), near the 600 gram mark. Let’s remove that pesky rodent…

And again, an unfortunate screenprint coming your way:

[screenshot: code and graph]
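The disabled chunk presumably filtered the outlier out and redrew the boxplots, along these lines (the 550 g cutoff is my assumption; it separates the single rat visible near the 600 g mark):

# Remove the outlier rat and redraw the boxplots
RATSL9S1 <- RATSL9S %>%
  filter(mean < 550)

ggplot(RATSL9S1, aes(x = Group, y = mean)) +
  geom_boxplot() +
  stat_summary(fun.y = mean, geom = "point", shape = 23, size = 4, fill = "white") +
  scale_y_continuous(name = "mean(Weight), days 8-64")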

Excluding the outlier changes the box plots quite significantly, especially for Group 2. The boxes being so short means that the mean weights of the rats within each group are very close to one another.

1E. T for test and A for Anova

The next step we will carry out is a more formal test to assess the differences between the diet groups. This is where a t-test would come in if we were working on the BPRS data; the t-test is appropriate when comparing two group means. For a comparison of more than two group means (as in RATS), one-way analysis of variance (ANOVA) is the appropriate method instead.

In a longitudinal study, the baseline measurement of the outcome is often correlated with the chosen summary measure. We will therefore include the baseline in the analysis - that is, the starting Weight of the rats on day 1 (WD1).

As I had to disable the code chunk above, the code that would normally appear below cannot recognize RATSL9S, so I’ve had to disable it as well… More screenprints!

[screenshot: code and output]
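A sketch of the disabled chunk, assuming (as in the course material) that the rows of RATSL9S are in the same Group-and-ID order as the wide-form RATS, so the baseline can be taken straight from the WD1 column:

# Add the baseline weight from day 1 of the original wide-form data
RATSL9S2 <- RATSL9S %>%
  mutate(baseline = RATS$WD1)

# Fit a linear model with the mean summary measure as the response
fit <- lm(mean ~ baseline + Group, data = RATSL9S2)

# Compute the analysis of variance table for the fitted model
anova(fit)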

Looking at the results, the baseline is highly significant. We can also see that the variable Group is significant at the 5% level, meaning that the diet group the rats belong to has a significant effect.

Did I already say “Ah, rats!”? Well, we can say that again! The Index file knitting issue remains a mystery. Let’s hope we have more luck with the second part…


Part 2. Implementing the Analyses of Chapter 9 of MABS Using the BPRS Data

We will start with the code from meet_and_repeat.R again…

# Read in the wide-form BPRS data
BPRS <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/BPRS.txt", sep = " ", header = T)

# Convert the categorical variables to factors
BPRS$treatment <- factor(BPRS$treatment)
BPRS$subject <- factor(BPRS$subject)

# Convert to long form and extract the week number from the column names
BPRSL <-  BPRS %>% gather(key = weeks, value = bprs, -treatment, -subject)
BPRSL <-  BPRSL %>% mutate(week = as.integer(substr(weeks, 5, 5)))

2A. Plot first, ask questions later

Let’s begin by plotting the data and identifying the observations in each treatment group.

# Plot the BPRSL data
ggplot(BPRSL, aes(x = week, y = bprs, group = subject)) +
  geom_line(aes(col = treatment)) +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ treatment, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(limits = c(min(BPRSL$bprs), max(BPRSL$bprs)))

Here we can see all the individuals in each treatment group, and how their bprs score behaves throughout the weeks. In the DataCamp exercise (for RATS data), the different groups were all presented in the same graph. However, that does not work with our BPRS data as there are so many subjects. Hence the two separate graphs for the two treatment groups. We can see that, for the most part, the bprs scores are declining throughout the weeks, indicating that the treatments are at least somewhat effective. Another observation of interest here is that especially within the treatment 1 group, the scores tend to converge towards the end of the treatment weeks. As is the case so often, though, there are exceptions to the rule: we can observe two subjects in each group whose bprs scores start increasing before the treatment weeks are over.

2B. Holding on to independence: The Linear model

To begin this analysis, we will ignore some home truths! We will pretend that all of the observations are independent of one another, ignoring the fact that each subject’s nine weekly bprs scores are repeated measurements of the same individual. We then have a data set consisting of 360 observations that can easily be analyzed using linear regression.

In this section, we will fit a linear regression model. The response variable is bprs and the explanatory variables are week and treatment.

# create a regression model BPRS_reg
BPRS_reg <- lm(bprs ~ week + treatment, data = BPRSL)

# print out a summary of the model
summary(BPRS_reg)
## 
## Call:
## lm(formula = bprs ~ week + treatment, data = BPRSL)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -22.454  -8.965  -3.196   7.002  50.244 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  46.4539     1.3670  33.982   <2e-16 ***
## week         -2.2704     0.2524  -8.995   <2e-16 ***
## treatment2    0.5722     1.3034   0.439    0.661    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 12.37 on 357 degrees of freedom
## Multiple R-squared:  0.1851, Adjusted R-squared:  0.1806 
## F-statistic: 40.55 on 2 and 357 DF,  p-value: < 2.2e-16

The summary table above shows the results from fitting a linear regression model with the bprs score as the response variable and treatment and week as the explanatory variables. This model ignores the repeated-measures structure of the data.

In this summary, the worthwhile observation is that the regression coefficient for week is highly significant, whereas, conditional on week, treatment group 2 does not differ significantly from treatment group 1 (p = 0.661).

So, what does a graphical representation of the above data look like?

p1 <- ggplot(BPRSL, aes(x = week, y = bprs, col = treatment))  
p2 <- p1 + geom_point()  
p3 <- p2 + geom_smooth(method = "lm")
p3

According to this graph, the bprs scores of treatment group 1 start higher and decline faster throughout the weeks than the scores of the subjects in group 2. This indicates that the treatment that the subjects receive in group 1 is more effective.

2C. The Random Intercept Model

The model that we created in the previous step assumed that the weekly bprs scores are independent of one another. However, this is highly unlikely. Now we will move on to some more appropriate models and graphical representations of our data.

We will fit a random intercept model of the bprs score data using week and treatment as the explanatory variables. This enables the linear regression fit for each subject to differ in intercept from the other subjects.

The package we will use is lme4. It offers some handy tools for linear and generalized linear mixed-effects models. We will be using the now-familiar ~ operator and also the vertical bar for distinguishing random-effects terms.

# access library lme4
library(lme4)
## Loading required package: Matrix
## 
## Attaching package: 'Matrix'
## The following object is masked from 'package:tidyr':
## 
##     expand
# Create a random intercept model
BPRS_ref <- lmer(bprs ~ week + treatment + (1 | subject), data = BPRSL, REML = FALSE)

# Print the summary of the model
summary(BPRS_ref)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + (1 | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2748.7   2768.1  -1369.4   2738.7      355 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0481 -0.6749 -0.1361  0.4813  3.4855 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  subject  (Intercept)  47.41    6.885  
##  Residual             104.21   10.208  
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  46.4539     1.9090  24.334
## week         -2.2704     0.2084 -10.896
## treatment2    0.5722     1.0761   0.532
## 
## Correlation of Fixed Effects:
##            (Intr) week  
## week       -0.437       
## treatment2 -0.282  0.000

The standard deviation of the random subject intercepts is 6.885, which is sizeable compared with the residual standard deviation of 10.208 - there is clear between-subject variation in the bprs scores.
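To gauge how large that between-subject variation is relative to the total, we can compute the intraclass correlation from the variance components above (a quick back-of-envelope check, not part of the original output):

# intraclass correlation = between-subject variance / total variance
47.41 / (47.41 + 104.21)  # ~ 0.31, so about a third of the variation is between subjects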

2D. Slippery slopes: Random Intercept and Random Slope Model

Now we will fit a random intercept and random slope model to the bprs scores data. This will allow the linear regression fits for each individual to differ in not only intercept but also in slope. This will make it possible to account for the differences in the subjects’ bprs scores and also the effects of time.

# create a random intercept and random slope model
BPRS_ref1 <- lmer(bprs ~ week + treatment + (week | subject), data = BPRSL, REML = FALSE)

# print a summary of the model
summary(BPRS_ref1)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + (week | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2745.4   2772.6  -1365.7   2731.4      353 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.8919 -0.6194 -0.0691  0.5531  3.7976 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  subject  (Intercept) 64.8202  8.0511        
##           week         0.9608  0.9802   -0.51
##  Residual             97.4307  9.8707        
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  46.4539     2.1052  22.066
## week         -2.2704     0.2977  -7.626
## treatment2    0.5722     1.0405   0.550
## 
## Correlation of Fixed Effects:
##            (Intr) week  
## week       -0.582       
## treatment2 -0.247  0.000
# perform an ANOVA test on the two models
anova(BPRS_ref1, BPRS_ref)
## Data: BPRSL
## Models:
## BPRS_ref: bprs ~ week + treatment + (1 | subject)
## BPRS_ref1: bprs ~ week + treatment + (week | subject)
##           Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)  
## BPRS_ref   5 2748.7 2768.1 -1369.4   2738.7                           
## BPRS_ref1  7 2745.4 2772.6 -1365.7   2731.4 7.2721      2    0.02636 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The likelihood ratio test between BPRS_ref and BPRS_ref1 gives a chi-squared statistic of 7.27 with a p-value of 0.026. This is below the 5% significance level, so the random intercept and random slope model fits the data significantly better than the random intercept model alone.

2E. Time to interact: Random Intercept and Random Slope Model with interaction

Now for the final part: fitting a random intercept and slope model that allows for a treatment group x time interaction.

# create a random intercept and random slope model
BPRS_ref2 <- lmer(bprs ~ week * treatment + (week | subject), data = BPRSL, REML = FALSE)

# print a summary of the model
summary(BPRS_ref2)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week * treatment + (week | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2744.3   2775.4  -1364.1   2728.3      352 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0512 -0.6271 -0.0767  0.5288  3.9260 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  subject  (Intercept) 65.0016  8.0624        
##           week         0.9688  0.9843   -0.51
##  Residual             96.4699  9.8219        
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##                 Estimate Std. Error t value
## (Intercept)      47.8856     2.2522  21.262
## week             -2.6283     0.3589  -7.323
## treatment2       -2.2911     1.9090  -1.200
## week:treatment2   0.7158     0.4010   1.785
## 
## Correlation of Fixed Effects:
##             (Intr) week   trtmn2
## week        -0.650              
## treatment2  -0.424  0.469       
## wek:trtmnt2  0.356 -0.559 -0.840
# perform an ANOVA test on the two models
anova(BPRS_ref2, BPRS_ref1)
## Data: BPRSL
## Models:
## BPRS_ref1: bprs ~ week + treatment + (week | subject)
## BPRS_ref2: bprs ~ week * treatment + (week | subject)
##           Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)  
## BPRS_ref1  7 2745.4 2772.6 -1365.7   2731.4                           
## BPRS_ref2  8 2744.3 2775.4 -1364.1   2728.3 3.1712      1    0.07495 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# draw the plot of BPRSL
ggplot(BPRSL, aes(x = week, y = bprs, group = subject)) +
  geom_line(aes(linetype = treatment)) +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ treatment, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(limits = c(min(BPRSL$bprs), max(BPRSL$bprs)))

# Create a vector of the fitted values
Fitted <- fitted(BPRS_ref2)

# Create a new column fitted to BPRSL
BPRSL <- BPRSL %>%
  mutate(Fitted)

# draw the plot of BPRSL
ggplot(BPRSL, aes(x = week, y = Fitted, group = subject)) +
  geom_line(aes(linetype = treatment)) +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ treatment, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(limits = c(min(BPRSL$bprs), max(BPRSL$bprs)))

Here it is worth noting that the first plot (of the observed values) is the same as before, again showing the subjects whose scores started climbing back up before the treatment weeks were over.

The ANOVA test on the two models BPRS_ref2 and BPRS_ref1 gives a chi-squared value of 3.1712 with a p-value of 0.075. This falls short of the 5% significance level, so the interaction term improves the fit only weakly (it would pass at the 10% level).

Next we created a vector of the fitted values of the model with the function fitted() and added this new vector as a column in BPRSL. Then we used these fitted values to draw a new plot of BPRSL.

The second set of graphs is interesting: it shows the fitted values of the bprs scores. These graphics underline the fact that the interaction model does not fit the observed data quite as well as the corresponding model fit the RATS data.

Looking at the graphs of the fitted values, it appears that the slope of the declining bprs scores is steeper in the first treatment group, indicating that the first form of treatment was more effective. Out of curiosity, I tried to find out what the treatments were that the subjects received in the two groups, but unfortunately this information was not readily available anywhere. While looking, I did manage to find a pdf of the Davis 2002 book, “Statistical Methods for the Analysis of Repeated Measurements”. If you’re interested in learning more about methods for repeated measurements (i.e. longitudinal data), I recommend taking a look at this volume.

Again, thanks for reading to the end. Sadly, this was the last of the weekly exercises. I hope you’ve enjoyed yourself as much as I have!


Goodbye
